
    Autism as a disorder of neural information processing: directions for research and targets for therapy

    The broad variation in phenotypes and severities within autism spectrum disorders suggests the involvement of multiple predisposing factors, interacting in complex ways with normal developmental courses and gradients. Identification of these factors, and the common developmental path into which they feed, is hampered by the large degree of convergence from causal factors to altered brain development, and divergence from abnormal brain development into altered cognition and behaviour. Genetic, neurochemical, neuroimaging and behavioural findings on autism, as well as studies of normal development and of genetic syndromes that share symptoms with autism, offer hypotheses as to the nature of causal factors and their possible effects on the structure and dynamics of neural systems. Such alterations in neural properties may in turn perturb activity-dependent development, giving rise to a complex behavioural syndrome many steps removed from the root causes. Animal models based on genetic, neurochemical, neurophysiological, and behavioural manipulations offer the possibility of exploring these developmental processes in detail, as do human studies addressing endophenotypes beyond the diagnosis itself.

    A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications.

    Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of mathematical relationships is not necessary if the reader is only interested in perceiving the practical meaning of model assumptions, weaknesses and strengths from a user point of view.

    Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic.

    Conclusion: Knowledge of model assumptions and of the theoretical strengths and weaknesses of the different approaches is fundamental for designing models that estimate the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performance of locally developed competitive models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study the above predictive models will therefore be tested on real data acquired in a specialized ICU.
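    The following is a minimal sketch (not the paper's own code or data) of how three of the model families discussed above could be compared on a morbidity-style classification task; the synthetic features and the choice of scikit-learn estimators are illustrative assumptions.

```python
# Illustrative comparison of model families on synthetic "ICU morbidity" data.
# The dataset is a stand-in for preoperative/ICU variables; 1 = morbidity.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes":         GaussianNB(),
    "k-nearest neighbour": KNeighborsClassifier(n_neighbors=15),
}

for name, model in models.items():
    # Standardisation matters for distance-based methods such as k-NN.
    pipe = make_pipeline(StandardScaler(), model)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean cross-validated AUC = {auc.mean():.3f}")
```

    Cross-validated discrimination (here AUC) is only a starting point; as the abstract notes, calibration and local recalibration would also need to be assessed before clinical use.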

    Increased Sensory Processing Atypicalities in Parents of Multiplex ASD Families Versus Typically Developing and Simplex ASD Families

    Recent studies have suggested that sensory processing atypicalities may share genetic influences with autism spectrum disorder (ASD). To further investigate this, the adolescent/adult sensory profile (AASP) questionnaire was distributed to 85 parents of typically developing children (P-TD), 121 parents from simplex ASD families (SPX), and 54 parents from multiplex ASD families (MPX). After controlling for gender and presence of mental disorders, results showed that MPX parents significantly differed from P-TD parents in all four subscales of the AASP. Differences between SPX and MPX parents reached significance in the Sensory Sensitivity subscale and also in subsequent modality-specific analyses in the auditory and visual domains. Our finding that parents with high genetic liability for ASD (i.e., MPX) had more sensory processing atypicalities than parents with low (i.e., SPX) or no (i.e., P-TD) ASD genetic liability suggests that sensory processing atypicalities may contribute to the genetic susceptibility for ASD.
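    As an illustration only (assumed column names and simulated data, not the study's analysis script), a group comparison of an AASP subscale adjusted for gender and mental-disorder status could be set up as an ANCOVA-style linear model:

```python
# Sketch of an adjusted group comparison with statsmodels; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 260
df = pd.DataFrame({
    "group": rng.choice(["P-TD", "SPX", "MPX"], size=n),          # parent group
    "gender": rng.choice(["F", "M"], size=n),
    "mental_disorder": rng.choice([0, 1], size=n, p=[0.8, 0.2]),
    "sensory_sensitivity": rng.normal(45, 8, size=n),             # AASP subscale score
})

# Group effect on the subscale score, controlling for gender and
# presence of a mental disorder, with P-TD as the reference group.
model = smf.ols("sensory_sensitivity ~ C(group, Treatment('P-TD')) "
                "+ C(gender) + mental_disorder", data=df).fit()
print(model.summary())
```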

    Data Descriptor: A global multiproxy database for temperature reconstructions of the Common Era

    Reproducible climate reconstructions of the Common Era (1 CE to present) are key to placing industrial-era warming into the context of natural climatic variability. Here we present a community-sourced database of temperature-sensitive proxy records from the PAGES2k initiative. The database gathers 692 records from 648 locations, including all continental regions and major ocean basins. The records are from trees, ice, sediment, corals, speleothems, documentary evidence, and other archives. They range in length from 50 to 2000 years, with a median of 547 years, while temporal resolution ranges from biweekly to centennial. Nearly half of the proxy time series are significantly correlated with HadCRUT4.2 surface temperature over the period 1850-2014. Global temperature composites show a remarkable degree of coherence between high- and low-resolution archives, with broadly similar patterns across archive types, terrestrial versus marine locations, and screening criteria. The database is suited to investigations of global and regional temperature variability over the Common Era, and is shared in the Linked Paleo Data (LiPD) format, including serializations in Matlab, R and Python.

    Since the pioneering work of D'Arrigo and Jacoby [1-3], as well as Mann et al. [4,5], temperature reconstructions of the Common Era have become a key component of climate assessments [6-9]. Such reconstructions depend strongly on the composition of the underlying network of climate proxies [10], and it is therefore critical for the climate community to have access to a community-vetted, quality-controlled database of temperature-sensitive records stored in a self-describing format. The Past Global Changes (PAGES) 2k consortium, a self-organized, international group of experts, recently assembled such a database, and used it to reconstruct surface temperature over continental-scale regions [11] (hereafter, 'PAGES2k-2013').

    This data descriptor presents version 2.0.0 of the PAGES2k proxy temperature database (Data Citation 1). It augments the PAGES2k-2013 collection of terrestrial records with marine records assembled by the Ocean2k working group at centennial [12] and annual [13] time scales. In addition to these previously published data compilations, this version includes substantially more records, extensive new metadata, and validation. Furthermore, the selection criteria for records included in this version are applied more uniformly and transparently across regions, resulting in a more cohesive data product.

    This data descriptor describes the contents of the database and the criteria for inclusion, and quantifies the relation of each record with instrumental temperature. In addition, the paleotemperature time series are summarized as composites to highlight the most salient decadal- to centennial-scale behaviour of the dataset and to check mutual consistency between paleoclimate archives. We provide extensive Matlab code to probe the database: processing, filtering and aggregating it in various ways to investigate temperature variability over the Common Era. The unique approach to data stewardship and code-sharing employed here is designed to enable an unprecedented scale of investigation of the temperature history of the Common Era, by the scientific community and citizen-scientists alike.
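    A rough sketch of the screening-and-compositing idea described above, written in plain pandas rather than the authors' Matlab code; the record structure, series names, and instrumental input are assumptions for illustration.

```python
# Screen annual proxy records against an instrumental series, then composite.
import pandas as pd
from scipy.stats import pearsonr

def screen_and_composite(records, instrumental, overlap=(1850, 2014), alpha=0.05):
    """records: dict of annual proxy pandas Series indexed by year (int);
    instrumental: annual temperature Series (e.g. a HadCRUT-style product).
    Returns a standardised composite of the records that correlate
    significantly with temperature over the overlap period."""
    kept = []
    inst = instrumental.loc[overlap[0]:overlap[1]]
    for name, series in records.items():
        joined = pd.concat([series, inst], axis=1, join="inner").dropna()
        if len(joined) < 30:          # require a reasonable overlap
            continue
        r, p = pearsonr(joined.iloc[:, 0], joined.iloc[:, 1])
        if p < alpha:
            kept.append((series - series.mean()) / series.std())  # z-score
    if not kept:
        raise ValueError("no records passed screening")
    # Simple unweighted composite: mean of the standardised screened records.
    return pd.concat(kept, axis=1).mean(axis=1), len(kept)
```

    The actual database workflow (LiPD serializations and the released Matlab code) handles far more, including metadata, mixed temporal resolutions and multiple screening criteria; this sketch only mirrors the basic screen-then-composite step.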