514 research outputs found

    Selective Vulnerabilities of N-methyl-D-aspartate (NMDA) Receptors During Brain Aging

    N-methyl-D-aspartate (NMDA) receptors are present in high density within the cerebral cortex and hippocampus and play an important role in learning and memory. NMDA receptors are negatively affected by aging, but these effects are not uniform across the receptor complex. This review discusses the selective age-related vulnerabilities of different binding sites of the NMDA receptor complex, different subunits that comprise the complex, and the expression and functions of the receptor within different brain regions. Spatial reference memory, passive avoidance, and working memory, as well as place-field stability and expansion, all involve NMDA receptors. Aged animals show deficits in these functions compared with young animals, and some studies have identified an association between age-related changes in NMDA receptor expression and poor memory performance. A number of diet and drug interventions have shown potential for reversing or slowing the effects of aging on the NMDA receptor. On the other hand, there is mounting evidence that the NMDA receptors that remain in aged individuals are not always associated with good cognitive functioning. This may be due to a compensatory response of neurons to the decline in NMDA receptor expression, or to a change in the subunit composition of the remaining receptors. These studies suggest that treatments aimed at preventing or reversing the effects of aging on the NMDA receptor may help ameliorate the memory declines associated with aging. However, we must remain mindful that such interventions may also have negative consequences in aged individuals.

    The impact of resources on decision making

    Decision making is a significant activity within industry, and although much attention has been paid to the manner in which goals affect how decision making is executed, there has been less focus on the impact that decision-making resources can have. This article describes an experiment that sought to provide greater insight into the impact that resources can have on how decision making is executed. Investigated variables included the experience levels of decision makers and the quality and availability of information resources. The experiment provided insights into the variety of impacts that resources can have upon decision making, manifested through the evolution of the approaches, methods, and processes used within it. The findings illustrated that resources could affect the decision-making process but not the method or approach; the method and process but not the approach; or the approach, method, and process together. In addition, resources were observed to have multiple impacts, which can emerge over different timescales. Given these findings, research is suggested into the development of resource-impact models that would describe the relationships between the decision-making activity and resources, together with techniques for reasoning over these models. This would support the development of systems that offer improved decision support by managing the impact of resources on decision making.

    Process algebra modelling styles for biomolecular processes

    We investigate how biomolecular processes are modelled in process algebras, focussing on chemical reactions. We consider various modelling styles and how design decisions made in the definition of the process algebra affect how a modelling style can be applied. Our goal is to highlight the often implicit choices that modellers make in choosing a formalism, and to illustrate, through examples, how this can affect expressiveness as well as the type and complexity of the analysis that can be performed.
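
    As an illustrative aside, not drawn from the paper: stochastic process algebras used for biochemistry (e.g., Bio-PEPA, the stochastic pi-calculus) typically give chemical-reaction models a continuous-time Markov chain semantics, which can be simulated with Gillespie's algorithm. The Python sketch below simulates the single reaction A + B -> C under that semantics; the rate constant and populations are assumptions chosen for the example.

```python
import random

# Minimal Gillespie simulation of the reaction A + B -> C with mass-action
# rate constant k. This is the CTMC semantics that stochastic process
# algebras typically assign to such a reaction; all values are illustrative.
def gillespie(a, b, c, k, t_end):
    t, trace = 0.0, [(0.0, a, b, c)]
    while t < t_end:
        propensity = k * a * b               # mass-action propensity
        if propensity == 0:
            break                            # no reaction can fire
        t += random.expovariate(propensity)  # exponential waiting time
        a, b, c = a - 1, b - 1, c + 1        # fire A + B -> C once
        trace.append((t, a, b, c))
    return trace

for t, a, b, c in gillespie(a=100, b=80, c=0, k=0.01, t_end=5.0)[:5]:
    print(f"t={t:.3f}  A={a}  B={b}  C={c}")
```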

    Quantitative Regular Expressions for Arrhythmia Detection Algorithms

    Motivated by the problem of verifying the correctness of arrhythmia-detection algorithms, we present a formalization of these algorithms in the language of Quantitative Regular Expressions. QREs are a flexible formal language for specifying complex numerical queries over data streams, with provable runtime and memory consumption guarantees. The medical-device algorithms of interest include peak detection (where a peak in a cardiac signal indicates a heartbeat) and various discriminators, each of which uses a feature of the cardiac signal to distinguish fatal from non-fatal arrhythmias. Expressing these algorithms' desired output in current temporal logics, and implementing them via monitor synthesis, is cumbersome, error-prone, computationally expensive, and sometimes infeasible. In contrast, we show that a range of peak detectors (in both the time and wavelet domains) and various discriminators at the heart of today's arrhythmia-detection devices are easily expressible in QREs. The fact that one formalism (QREs) is used to describe the desired end-to-end operation of an arrhythmia detector opens the way to formal analysis and rigorous testing of these detectors' correctness and performance. Such analysis could alleviate the regulatory burden on device developers when modifying their algorithms. The performance of the peak-detection QREs is demonstrated by running them on real patient data, on which they yield results on par with those provided by a cardiologist.
    Comment: CMSB 2017: 15th Conference on Computational Methods for Systems Biology
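
    The paper's peak detectors are expressed as QREs; as a rough, hand-rolled illustration of the kind of streaming numerical query involved, the sketch below scans a signal with constant state and reports local maxima above a threshold. The threshold and the three-sample notion of a peak are simplifying assumptions, not the detectors formalized in the paper.

```python
from typing import Iterable, Iterator

# Illustrative streaming peak detector: yields the index of each local
# maximum above a threshold, using bounded state. QREs specify this kind
# of stream query declaratively; this hand-rolled loop is only a stand-in.
def peaks(signal: Iterable[float], threshold: float) -> Iterator[int]:
    prev2 = prev1 = None
    for i, x in enumerate(signal):
        if prev2 is not None and prev1 > prev2 and prev1 >= x and prev1 > threshold:
            yield i - 1                 # prev1 is a local maximum above threshold
        prev2, prev1 = prev1, x

ecg = [0.1, 0.2, 1.4, 0.3, 0.2, 0.9, 1.8, 0.5]   # toy samples, not real data
print(list(peaks(ecg, threshold=1.0)))            # -> [2, 6]
```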

    Personality trait development in midlife: exploring the impact of psychological turning points

    This study examined long-term personality trait development in midlife and explored the impact of psychological turning points on personality change. Self-defined psychological turning points reflect major changes in the ways people think or feel about an important part of their life, such as work, family, and beliefs about themselves and the world. This study used longitudinal data from the Midlife in the US survey to examine personality trait development in adults aged 40–60 years. The Big Five traits were assessed in 1995 and 2005 by means of self-descriptive adjectives. Seven types of self-identified psychological turning points were obtained in 1995. Results indicated relatively high stability with respect to rank-orders and mean levels of personality traits, and at the same time reliable individual differences in change. This implies that despite the relative stability of personality traits in the overall sample, some individuals show systematic deviations from the sample mean levels. Psychological turning points in general showed very little influence on personality trait change, although some effects were found for specific types of turning points that warrant further research, such as discovering that a close friend or relative was a much better person than one had thought.

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for more sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
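
    For contrast with the symbolic methods the abstract discusses, the sketch below shows the ordinary explicit approach: breadth-first exploration of reachable states with one hash-set entry per state. The toy bounded-counter model and all names are our own illustration, not from the paper.

```python
from collections import deque

# Explicit-state reachability: breadth-first exploration of the states
# reachable from an initial state, given a successor function. Memory
# grows with one set entry per reachable state, which is exactly the
# limitation that decision-diagram (symbolic) encodings try to avoid.
def reachable(initial, successors):
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy counter model: N in [0, 3], may increment or decrement within bounds.
succ = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 3]
states = reachable(0, succ)
print(sorted(states))                    # -> [0, 1, 2, 3]
print(any(n < 0 for n in states))        # "Can N become negative?" -> False
```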

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
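
    The abstract refers to an efficient variant of the Wilson method; as background only, the sketch below computes the plain textbook Wilson score interval for an estimated probability, not the paper's variant. The sample counts are invented for the example.

```python
import math

# Wilson score confidence interval for a Bernoulli proportion: the kind of
# estimate statistical model checking reports for "P(property holds)".
def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Return (low, high) for the proportion, ~95% confidence by default."""
    p_hat = successes / n
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g. the LTL property held on 470 of 500 sampled executions (made-up counts)
low, high = wilson_interval(470, 500)
print(f"P(property) in [{low:.3f}, {high:.3f}]")
```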