105 research outputs found

    Individual differences in toddlers' social understanding and prosocial behavior: Disposition or socialization?

    We examined how individual differences in social understanding contribute to variability in early-appearing prosocial behavior. Moreover, potential sources of variability in social understanding were explored and examined as additional possible predictors of prosocial behavior. Using a multi-method approach with both observed and parent-report measures, 325 children aged 18-30 months were administered measures of social understanding (e.g., use of emotion words; self-understanding), prosocial behavior (in separate tasks measuring instrumental helping, empathic helping, and sharing, as well as parent-reported prosociality at home), temperament (fearfulness, shyness, and social fear), and parental socialization of prosocial behavior in the family. Individual differences in social understanding predicted variability in empathic helping and parent-reported prosociality, but not instrumental helping or sharing. Parental socialization of prosocial behavior was positively associated with toddlers' social understanding, prosocial behavior at home, and instrumental helping in the lab, and negatively associated with sharing (possibly reflecting parents' increased efforts to encourage children who were less likely to share). Further, socialization moderated the association between social understanding and prosocial behavior, such that social understanding was less predictive of prosocial behavior among children whose parents took a more active role in socializing their prosociality. None of the dimensions of temperament was associated with either social understanding or prosocial behavior. Parental socialization of prosocial behavior is thus an important source of variability in children's early prosociality, acting in concert with early differences in social understanding, with different patterns of influence for different subtypes of prosocial behavior.

    Brain Biochemistry and Personality: A Magnetic Resonance Spectroscopy Study

    To investigate the biochemical correlates of normal personality, we used proton magnetic resonance spectroscopy (1H-MRS). Our sample consisted of 60 subjects ranging in age from 18 to 32 (27 females). Personality was assessed with the NEO Five-Factor Inventory (NEO-FFI). We measured brain biochemistry within the precuneus, the cingulate cortex, and underlying white matter. We hypothesized that brain biochemistry within these regions would predict individual differences across major domains of personality functioning. Biochemical models were fit for all personality domains, including Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Our findings involved differing concentrations of choline (Cho), creatine (Cre), and N-acetylaspartate (NAA) in regions both within the Default Mode Network (DMN) (i.e., the posterior cingulate cortex) and in the white matter underlying it (i.e., beneath the precuneus). These results add to an emerging literature on personality neuroscience and implicate biochemical integrity within the default mode network as constraining major personality domains in normal human subjects.

    Building information modelling to cut disruption in housing retrofit

    There is a large stock of solid-wall homes in the UK with poor thermal insulation and low energy performance. Although the UK government has supported efforts to improve these buildings, identifying appropriate technical solutions that effectively improve the existing stock remains challenging. This research investigates how four-dimensional building information modelling (4D BIM) could improve the retrofit of social housing, specifically that of ‘no-fines’ solid-wall homes, through the development of what-if scenarios that enable the analysis of alternative solutions considering costs, energy performance and disruption to users. This paper focuses on the use of 4D building information models to evaluate disruption to end users. The results indicate that developing such models supports a better understanding of the retrofit process on site. It also supports the definition of production plans that cause as little disruption as possible to users while delivering energy-oriented and cost-effective solutions.
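    The what-if comparison described above can be sketched as a simple scoring exercise: each candidate retrofit scenario is summarised by cost, energy saving and occupant disruption, then ranked by a weighted trade-off. This is a minimal illustration, not the paper's 4D BIM workflow; all scenario names, figures and weights below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RetrofitScenario:
    # Hypothetical summary of one what-if scenario taken from a 4D BIM model
    name: str
    cost_gbp: float           # estimated capital cost
    energy_saving_kwh: float  # estimated annual energy saving
    disruption_days: int      # days occupants are disturbed on site

def rank_scenarios(scenarios, w_cost=1.0, w_energy=1.0, w_disruption=1.0):
    """Rank scenarios by a weighted score: higher energy saving is better,
    lower cost and disruption are better. Each criterion is min-max normalised."""
    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]
    cost = norm([s.cost_gbp for s in scenarios])
    energy = norm([s.energy_saving_kwh for s in scenarios])
    disr = norm([float(s.disruption_days) for s in scenarios])
    scores = [w_energy * e - w_cost * c - w_disruption * d
              for c, e, d in zip(cost, energy, disr)]
    return sorted(zip(scenarios, scores), key=lambda p: -p[1])

# Illustrative (invented) alternatives for a 'no-fines' solid-wall home
scenarios = [
    RetrofitScenario("External wall insulation", 12000, 4500, 10),
    RetrofitScenario("Internal wall insulation", 8000, 3800, 25),
    RetrofitScenario("Hybrid (external + loft)", 14000, 5200, 12),
]
ranked = rank_scenarios(scenarios)
```

A production-planning tool would replace the invented numbers with quantities extracted from the 4D model, but the trade-off structure is the same.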

    Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Background: Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been studied extensively in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and computationally inefficient. In current microarray analysis, the common practice is to select a few thousand (or a few hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy, and can miss biologically important genes.
    Results: The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method using adaptive kernel ridge regression. The proposed variable selection method operates on the kernel matrix in the dual problem, which involves a much smaller n × n matrix; it is therefore very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited.
    Conclusions: The proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes from ultra-high dimensional genomic data. We have demonstrated their performance with both simulated and real data; in our (albeit limited) computational studies, the method performed very well.
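    The computational advantage of the dual formulation can be illustrated with a minimal sketch: plain kernel ridge regression on log survival times, where the linear system solved is n × n no matter how large m is. This is a simplification of the paper's approach, not its implementation — it ignores censoring and the adaptive kernel weighting, and all function names and parameter values are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Pairwise squared Euclidean distances via the expansion trick,
    # clipped at zero to guard against tiny negative rounding errors
    d2 = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_ridge_aft(X, log_time, lam=1.0, gamma=None):
    """Dual-form kernel ridge regression on log survival times.
    Solves an n x n linear system, independent of the number of genes m."""
    n, m = X.shape
    if gamma is None:
        gamma = 1.0 / m  # common default scaling for high-dimensional inputs
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(n), log_time)  # dual coefficients
    return alpha, gamma

# Toy data: n = 30 samples, m = 5000 "genes" (n << m)
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5000))
log_t = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(30)  # synthetic log survival times
alpha, gamma = kernel_ridge_aft(X, log_t)
pred = rbf_kernel(X, X, gamma) @ alpha  # fitted log survival times
```

The key point is that `K` is 30 × 30 even though the feature matrix has 5,000 columns, which is why the dual problem stays tractable when n ≪ m.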

    Single Molecule Analysis Research Tool (SMART): An Integrated Approach for Analyzing Single Molecule Data

    Single molecule studies have expanded rapidly over the past decade and have the ability to provide an unprecedented level of understanding of biological systems. A common challenge upon introduction of novel, data-rich approaches is the management, processing, and analysis of the complex data sets that are generated. We provide a standardized approach for analyzing these data in the freely available software package SMART: Single Molecule Analysis Research Tool. SMART provides a format for organizing and easily accessing single molecule data, a general hidden Markov modeling algorithm for fitting an array of possible models specified by the user, and a standardized data structure with graphical user interfaces to streamline the analysis and visualization of data. This approach guides experimental design, facilitating acquisition of the maximal information from single molecule experiments. SMART also provides a standardized format to allow dissemination of single molecule data and transparency in the analysis of reported data.
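    The hidden-Markov-modeling step at the core of tools like SMART can be illustrated with a minimal sketch — not SMART's actual implementation: a two-state Gaussian HMM decoded with the Viterbi algorithm over a simulated single-molecule intensity trace. All names, state means and noise levels below are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def viterbi_2state(obs, mus, sigma, p_stay=0.95):
    """Most likely state path for a two-state Gaussian HMM, in the log domain."""
    n = len(obs)
    logA = np.log(np.array([[p_stay, 1 - p_stay],
                            [1 - p_stay, p_stay]]))        # transition matrix
    logB = np.log(np.array([gaussian_pdf(obs, mus[0], sigma),
                            gaussian_pdf(obs, mus[1], sigma)]).T + 1e-300)
    delta = np.zeros((n, 2))                               # best log-prob so far
    back = np.zeros((n, 2), dtype=int)                     # backpointers
    delta[0] = np.log(0.5) + logB[0]                       # uniform initial state
    for t in range(1, n):
        trans = delta[t - 1][:, None] + logA
        back[t] = trans.argmax(0)
        delta[t] = trans.max(0) + logB[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):                         # trace back
        path[t] = back[t + 1, path[t + 1]]
    return path

# Simulate a trace that dwells in a low state, then switches to a high state
rng = np.random.default_rng(1)
true = np.r_[np.zeros(100, int), np.ones(100, int)]
obs = np.where(true == 0, 0.2, 0.8) + rng.normal(0, 0.05, 200)
path = viterbi_2state(obs, mus=(0.2, 0.8), sigma=0.05)
```

A full analysis package would additionally estimate the model parameters (e.g., by Baum-Welch) and support user-specified numbers of states, as the abstract describes.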