
    Beyond element-wise interactions: identifying complex interactions in biological processes

    Background: Biological processes typically involve the interactions of a number of elements (genes, cells) acting on each other. Such processes are often modelled as networks whose nodes are the elements in question and whose edges are pairwise relations between them (transcription, inhibition). But more often than not, elements actually work cooperatively or competitively to achieve a task, or an element acts on the interaction between two others, as in the case of an enzyme controlling a reaction rate. We call these types of interaction “complex” and propose ways to identify them from time-series observations. Methodology: We use Granger causality, a measure of the interaction between two signals, to characterize the influence of an enzyme on a reaction rate. We extend its traditional formulation to the case of multi-dimensional signals in order to capture group interactions, and not only element-wise interactions. Our method is extensively tested on simulated data and applied to three biological datasets: microarray data of the yeast Saccharomyces cerevisiae, local field potential recordings of two brain areas, and a metabolic reaction. Conclusions: Our results demonstrate that complex Granger causality can reveal new types of relation between signals and is particularly suited to biological data. Our approach also raises a fundamental issue for the systems biology approach, since finding all complex causalities (interactions) is an NP-hard problem.
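
    The pairwise building block of such an analysis can be sketched as a standard Granger-causality F-test comparing two autoregressive models. The function and synthetic data below are illustrative, not the paper's implementation; the paper's “complex” extension would replace the single driving signal with a multi-dimensional group of signals (extra columns in the full design matrix).

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for "x Granger-causes y": compare the residual sum of
    squares of a restricted AR model of y on its own past with a full
    model that also includes the past of x."""
    n = len(y)
    target = y[lag:]
    # k-th lag of a series s, aligned with target: s[lag - k : n - k]
    past_y = np.column_stack([y[lag - k : n - k] for k in range(1, lag + 1)])
    past_x = np.column_stack([x[lag - k : n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))
    restricted = np.hstack([ones, past_y])
    full = np.hstack([ones, past_y, past_x])

    def rss(design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ coef
        return float(resid @ resid)

    rss_r, rss_f = rss(restricted), rss(full)
    dof = n - lag - full.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

# Synthetic check: x drives y with a one-step delay, so the statistic
# for x -> y should dwarf the one for y -> x.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
```

    Stacking several driving signals into `past_x` turns this into a test for a group ("complex") interaction, which is the multi-dimensional extension the abstract describes.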

    How do we get there? Effects of cognitive aging on route memory

    © 2017 The Author(s) Research into the effects of cognitive aging on route navigation usually focuses on differences in learning performance. In contrast, we investigated age-related differences in route knowledge after successful route learning. One group of young adults and two groups of older adults, categorized using different cut-off scores on the Montreal Cognitive Assessment (MoCA), were trained until they could correctly recall short routes. During the test phase, they were asked to recall the sequence in which landmarks were encountered (Landmark Sequence Task), the sequence of turns (Direction Sequence Task), and the direction of turn at each landmark (Landmark Direction Task), and to identify the learned routes from a map perspective (Perspective Taking Task). Comparing the young participant group with the older group that scored high on the MoCA, we found effects of typical aging in learning performance and in the Direction Sequence Task. Comparing the two older groups, we found effects of early signs of atypical aging in the Landmark Direction and Perspective Taking Tasks. We found no differences between groups in the Landmark Sequence Task. Given that participants were able to recall routes after training, these results suggest that typical aging and early signs of atypical aging result in differential memory deficits for aspects of route knowledge.

    Management of Lung Nodules and Lung Cancer Screening During the COVID-19 Pandemic: CHEST Expert Panel Report

    Background: The risks from potential exposure to coronavirus disease 2019 (COVID-19), and the resource reallocation that has occurred to combat the pandemic, have altered the balance of benefits and harms that informed current (pre-COVID-19) guideline recommendations for lung cancer screening and lung nodule evaluation. Consensus statements were developed to guide clinicians managing lung cancer screening programs and patients with lung nodules during the COVID-19 pandemic. Methods: An expert panel of 24 members, including pulmonologists (n = 17), thoracic radiologists (n = 5), and thoracic surgeons (n = 2), was formed. The panel was provided with an overview of current evidence, summarized by recent guidelines related to lung cancer screening and lung nodule evaluation. The panel was convened by video teleconference to discuss and then vote on statements related to 12 common clinical scenarios. A predefined threshold of 70% of panel members voting agree or strongly agree was used to determine whether there was a consensus for each statement. Items that may influence decisions were listed as notes to be considered for each scenario. Results: Twelve statements related to baseline and annual lung cancer screening (n = 2), surveillance of a previously detected lung nodule (n = 5), evaluation of intermediate- and high-risk lung nodules (n = 4), and management of clinical stage I non–small-cell lung cancer (n = 1) were developed and modified. All 12 statements were confirmed as consensus statements according to the voting results. The consensus statements provide guidance about situations in which it was believed to be appropriate to delay screening, defer surveillance imaging of lung nodules, and minimize nonurgent interventions during the evaluation of lung nodules and stage I non–small-cell lung cancer. Conclusions: There was consensus that during the COVID-19 pandemic it is appropriate to defer enrollment in lung cancer screening and modify the evaluation of lung nodules because of the added risks from potential exposure and the need for resource reallocation. There are multiple local, regional, and patient-related factors that should be considered when applying these statements to individual patient care.

    Glucose variability measures and their effect on mortality: a systematic review

    Objective: To systematically review the medical literature on the association between glucose variability measures and mortality in critically ill patients. Methods: Studies assessing the association between a measure of glucose variability and mortality, and reporting original data from a clinical trial or observational study on critically ill adult patients, were searched in Ovid MEDLINE(R) and Ovid EMBASE(R). Data on patient populations, study designs, glucose regulation, statistical approaches, outcome measures, and glucose variability indicators (their definition and applicability) were extracted. Results: Twelve studies met the inclusion criteria; 13 different indicators were used to measure glucose variability. Standard deviation and the presence of both hypo- and hyperglycemia were the most common indicators. All studies reported a statistically significant association between mortality and at least one glucose variability indicator. In four studies both blood glucose levels and severity of illness were considered as confounders, but only one of them checked model assumptions to assert inference validity. Conclusions: Glucose variability has been quantified in many different ways, and in each study at least one of them appeared to be associated with mortality. Because of methodological limitations and the possibility of reporting bias, it remains unsettled whether, and under which quantification, this association is independent of other confounders. Future research will benefit from using an indicator reference subset for glucose variability, metrics that are linked more directly to negative physiological effects, more methodological rigor, and/or better reporting.
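
    The two most common indicators named in the review are straightforward to compute. A minimal sketch follows; the function name and the mg/dL thresholds are illustrative assumptions, not values taken from the reviewed studies.

```python
import statistics

def variability_indicators(glucose, hypo=70, hyper=180):
    """Two common glucose variability indicators for one patient's
    blood-glucose series (mg/dL): the standard deviation, and the
    presence of both hypo- and hyperglycemia during the stay."""
    return {
        "sd": statistics.stdev(glucose),
        "both_hypo_and_hyper": min(glucose) < hypo and max(glucose) > hyper,
    }
```

    Note that two series with the same standard deviation can differ in how often they cross the hypo- and hyperglycemia thresholds, which is one reason the review found 13 distinct indicators in only twelve studies.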

    Calibrating ADL-IADL scales to improve measurement accuracy and to extend the disability construct into the preclinical range: a systematic review

    Background: Interest in measuring functional status among nondisabled older adults has increased in recent years. This is, in part, due to the notion that adults identified as 'high risk' for functional decline portray a state that is potentially easier to reverse than overt disability. Assessing relatively healthy older adults with traditional self-report measures (activities of daily living) has proven difficult because these instruments were initially developed for institutionalised older adults. Perhaps less evident are problems associated with change scores and the potential for 'construct under-representation', which reflects the exclusion of important features of the construct (e.g., disability). Furthermore, establishing a formal hierarchy of functional status tells more than the typical simple summation of functional loss, and may have predictive value to the clinician monitoring older adults: if the sequence of task difficulty is accelerated or out of order, it may indicate the need for interventions. Methods: This review identified studies that employed item response theory (IRT) to examine or revise functional status scales. IRT can be used to transform the ordinal nature of functional status scales to interval-level data, which serves to increase diagnostic precision and sensitivity to clinical change. Furthermore, IRT can be used to rank items unequivocally along a hierarchy based on difficulty. It should be noted that this review is not concerned with contrasting IRT with more traditional classical test theory methodology. Results: A systematic search of four databases (PubMed, Embase, CINAHL, and PsycINFO) resulted in the review of 2,192 manuscripts. Of these, twelve met our inclusion/exclusion requirements and were targeted for further inspection. Conclusions: The manuscripts presented in this review appear to summarise gerontology's best efforts to improve construct validity and content validity (i.e., ceiling effects) for scales measuring the early stages of activity restriction in community-dwelling older adults. Several scales in this review were exceptional at reducing ceiling effects, reducing gaps in coverage along the construct, and establishing a formal hierarchy of functional decline. These instrument modifications make it plausible to detect minor changes in difficulty for IADL items positioned at the edge of the disability continuum, which can be used to signal the onset of progressive-type disability in older adults.
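
    The IRT machinery the review relies on can be illustrated with the simplest one-parameter (Rasch) model, in which each item carries a single difficulty parameter on the same interval-level logit scale as person ability; ranking items by that parameter yields the formal difficulty hierarchy discussed above. The function and the example item difficulties below are an illustrative sketch, not a model fitted in any reviewed study.

```python
import math

def rasch_probability(ability, difficulty):
    """One-parameter (Rasch) IRT model: probability that a person with
    the given ability succeeds on an item of the given difficulty.
    Both parameters live on the same interval-level logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical ADL-IADL items ordered from easy to hard; sorting by the
# difficulty parameter gives the formal hierarchy of functional decline.
items = {"feeding": -2.0, "bathing": -0.5, "shopping": 1.5}
```

    Because success probability is monotone in (ability − difficulty), the item ordering is the same for every person, which is what makes the hierarchy unambiguous.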

    Implementation of a parentage control system in Portuguese beef-cattle with a panel of microsatellite markers

    A study was conducted to assess the feasibility of applying a panel of 10 microsatellite markers in parentage control of beef cattle in Portugal. In the first stage, DNA samples were collected from 475 randomly selected animals of the Charolais, Limousin and Preta breeds. Across breeds and genetic markers, the means for average number of alleles, effective number of alleles, expected heterozygosity and polymorphic information content were 8.20, 4.43, 0.733 and 0.70, respectively. The informativeness of the various markers differed among breeds, but the set of 10 markers resulted in a combined probability above 0.9995 of excluding a random putative parent. The marker set thus developed was later used for parentage control in a group of 140 calves from several breeds in which faulty parentage recording was suspected. Overall, 76.4% of the calves in this group were compatible with the recorded parents, with most incompatibilities due to misidentification of the dam. Efforts must be made to improve the quality of pedigree information, with particular emphasis on information recorded at the calf's birth.
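
    A combined exclusion probability like the one quoted follows from the standard multiplication rule for independent (unlinked) markers: a random putative parent escapes exclusion only by escaping it at every marker. The per-marker values below are illustrative assumptions, not the Portuguese panel's actual figures.

```python
def combined_exclusion_probability(per_marker_pe):
    """Probability that a panel of independent markers excludes a random,
    unrelated putative parent, given each marker's individual exclusion
    probability: CPE = 1 - product(1 - PE_i)."""
    escape = 1.0
    for pe in per_marker_pe:
        escape *= 1.0 - pe
    return 1.0 - escape

# Ten moderately informative markers already push the combined
# probability past the 0.9995 level reported for the 10-marker panel.
panel = [0.55] * 10
```

    This multiplicative structure explains why a modest panel of moderately informative loci suffices: each added marker shrinks the escape probability geometrically.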

    Precaution or Integrated Responsibility Approach to Nanovaccines in Fish Farming? A Critical Appraisal of the UNESCO Precautionary Principle

    Nanoparticles have multifaceted advantages in drug administration, such as vaccine delivery, and hence hold promise for improving the protection of farmed fish against diseases caused by pathogens. However, there are concerns that the benefits associated with the distribution of nanoparticles may be accompanied by risks to the environment and health. The complexity of the natural and social systems involved implies that the information acquired in quantified risk assessments may be inadequate for evidence-based decisions. One controversial strategy for dealing with this kind of uncertainty is the precautionary principle. A few years ago, a UNESCO expert group suggested a new approach for implementation of the principle. Here we compare the UNESCO version with earlier versions and explore its advantages and disadvantages by applying it to the use of PLGA nanoparticles for the delivery of vaccines in aquaculture. Finally, we discuss whether a combined scientific and ethical analysis that involves the concept of responsibility can provide a supplement to the precautionary principle as a basis for decision-making in areas of scientific uncertainty, such as the application of nanoparticles in the vaccination of farmed fish.

    The Temporal Winner-Take-All Readout

    How can the central nervous system make accurate decisions about external stimuli at short times on the basis of the noisy responses of nerve cell populations? It has been suggested that spike time latency is the source of fast decisions. Here, we propose a simple and fast readout mechanism, the temporal Winner-Take-All (tWTA), and undertake a study of its accuracy. The tWTA is studied in the framework of a statistical model for the dynamic response of a nerve cell population to an external stimulus. Each cell is characterized by a preferred stimulus, a unique value of the external stimulus for which it responds fastest. The tWTA estimate for the stimulus is the preferred stimulus of the cell that fired the first spike in the entire population. We then pose the questions: How accurate is the tWTA readout? What are the parameters that govern this accuracy? What are the effects of noise correlations and baseline firing? We find that tWTA sensitivity to the stimulus grows algebraically fast with the number of cells in the population, N, in contrast to the logarithmically slow scaling of the conventional rate-WTA sensitivity with N. Noise correlations in the first-spike times of different cells can limit the accuracy of the tWTA readout, even in the limit of large N, similar to the effect that has been observed in population coding theory. We show that baseline firing also has a detrimental effect on tWTA accuracy. We suggest a generalization of the tWTA, the n-tWTA, which estimates the stimulus by the identity of the group of cells firing the first n spikes, and show how this simple generalization can overcome the detrimental effect of baseline firing. Thus, the tWTA can provide fast and accurate responses discriminating between a small number of alternatives. High accuracy in estimation of a continuous stimulus can be obtained using the n-tWTA.
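
    A toy simulation of the readout can be written in a few lines, under an assumed latency model: first-spike latency grows with the distance between the true stimulus and a cell's preferred stimulus, plus exponential noise. This latency model, the tuning-curve layout, and the averaging of winners' preferred stimuli are illustrative stand-ins for the paper's statistical model, not its exact formulation.

```python
import numpy as np

def twta(stimulus, preferred, rng, noise=0.05, n=1):
    """tWTA / n-tWTA readout: estimate the stimulus as the mean preferred
    stimulus of the n cells that fire first. Latency = tuning distance
    plus exponential noise (an assumed latency model)."""
    latency = np.abs(preferred - stimulus) + noise * rng.exponential(size=preferred.size)
    winners = np.argsort(latency)[:n]
    return preferred[winners].mean()

rng = np.random.default_rng(1)
preferred = np.linspace(-1.0, 1.0, 201)   # one cell per preferred stimulus
estimate = twta(0.3, preferred, rng, n=5)  # n-tWTA using the first 5 spikes
```

    Increasing n averages over more early spikes, which is the mechanism by which the n-tWTA suppresses the detrimental effect of stray baseline spikes on the single-winner readout.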