
    A Quadratically Regularized Functional Canonical Correlation Analysis for Identifying the Global Structure of Pleiotropy with NGS Data

    Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information for a deeper understanding of the complex genetic architecture of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current paradigm for multiple-phenotype association analysis lacks breadth (the number of phenotypes and genetic variants analyzed jointly) and depth (the hierarchical structure of phenotypes and genotypes). A key issue for high-dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high-dimensional genotype and phenotype data. To explore multiple levels of representation of genetic variants, learn the internal patterns involved in disease development, and overcome critical barriers to developing novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new framework for association analysis, the quadratically regularized functional CCA (QRFCCA), which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis, and (3) canonical correlation analysis (CCA). Large-scale simulations show that QRFCCA has much higher power than nine competing statistics while retaining appropriate type I error rates. To further evaluate performance, QRFCCA and the nine other statistics are applied to whole-genome sequencing data from the TwinsUK study. Using QRFCCA, we identify 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits. The results show that QRFCCA substantially outperforms the nine other statistics. Comment: 64 pages including 12 figures.
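
    The core computation, canonical correlation on ridge-regularized covariance matrices, can be sketched in a few lines. The snippet below is a minimal Python illustration on toy genotype and phenotype matrices; the function name, penalty values, and data are assumptions for demonstration only, and it omits the matrix-factorization and functional-smoothing components of the authors' QRFCCA.

```python
# Minimal sketch of CCA with quadratic (ridge-style) regularization.
# Illustrative only: not the authors' QRFCCA implementation.
import numpy as np

def _inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def regularized_cca(X, Y, lam_x=0.1, lam_y=0.1):
    """Canonical correlations between X (n x p) and Y (n x q) with ridge penalties."""
    X = X - X.mean(axis=0)                               # center columns
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + lam_x * np.eye(X.shape[1])       # regularized covariance of X
    Cyy = Y.T @ Y / n + lam_y * np.eye(Y.shape[1])       # regularized covariance of Y
    Cxy = X.T @ Y / n                                    # cross-covariance
    # Canonical correlations are the singular values of the whitened cross-covariance.
    rho = np.linalg.svd(_inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy), compute_uv=False)
    return np.clip(rho, 0.0, 1.0)

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(200, 50)).astype(float)     # toy genotype dosages (0/1/2)
P = rng.normal(size=(200, 10))                           # toy phenotype matrix
print(regularized_cca(G, P)[:5])                         # leading canonical correlations
```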

    On the structure of natural human movement

    Understanding human motor control is central to neuroscience, with strong implications for medicine, robotics, and evolution. It is thus surprising that the vast majority of motor control studies have focussed on human movement in the laboratory while neglecting behaviour in natural environments. We developed an experimental paradigm to quantify human behaviour at high resolution over extended periods of time in ecologically relevant environments. This allows us to discover novel insights, as well as evidence that contradicts well-established findings obtained under controlled laboratory conditions. Using our data, we map the statistics of natural human movement and their variability between people. The variability and complexity of the data recorded in these settings required us to develop new tools to extract meaningful information in an objective, data-driven fashion. Moving from descriptive statistics to structure, we identify stable structures of movement coordination, particularly within the arm-hand area. Combining our data with numerous published findings, we argue that current hypotheses that the brain simplifies motor control problems by dimensionality reduction are too reductionist. We propose an alternative hypothesis derived from sparse coding theory, a concept which has been successfully applied to the sensory system. To investigate this idea, we develop an algorithm for the unsupervised identification of sparse structures in natural movement data. Our method outperforms state-of-the-art algorithms in accuracy and data efficiency. Applying it to hand data reveals a dictionary of sparse eigenmotions (SEMs) which are well preserved across multiple subjects. These are highly efficient and invariant representations of natural movement, and suggest a potential higher-order grammatical structure or "movement language". Our findings make a number of testable predictions about the neural coding of movement in the cortex. This has direct consequences for advancing research on dextrous prosthetics and robotics, and profound implications for our understanding of how the brain controls the body.
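
    Sparse-structure discovery of this kind can be prototyped with standard dictionary-learning tools. The sketch below uses scikit-learn's DictionaryLearning on synthetic joint-angle data purely as a generic stand-in for the approach; it is not the authors' sparse-eigenmotion algorithm, and the data shapes, sparsity level, and parameter values are assumptions.

```python
# Minimal sketch of unsupervised sparse dictionary learning on synthetic "hand posture"
# data. Illustrative stand-in only: not the authors' SEM algorithm.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)
# Synthetic data: 500 samples of 20 joint angles generated from a few latent
# motion primitives with sparse activations, plus measurement noise.
primitives = rng.normal(size=(5, 20))
codes = rng.normal(size=(500, 5)) * (rng.random((500, 5)) < 0.3)   # sparse activations
X = codes @ primitives + 0.05 * rng.normal(size=(500, 20))

# Learn a dictionary of candidate movement primitives with an L1 sparsity penalty.
dl = DictionaryLearning(n_components=8, alpha=1.0, max_iter=200, random_state=0)
sparse_codes = dl.fit_transform(X)

print("dictionary shape:", dl.components_.shape)            # (8, 20) candidate primitives
print("mean active atoms per sample:",
      (np.abs(sparse_codes) > 1e-8).sum(axis=1).mean())      # sparsity of the codes
```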

    Sense of belonging for Black families at their child's school: factors that promote, inhibit, and oppress.

    The purpose of this qualitative study was to investigate how perceptions of family engagement structures influence the sense of belonging for Black families at their child’s school. A qualitative participatory action research (PAR) design was used, and purposeful homogenous sampling (Creswell, 2012) was used to select participants. The participants were Black parents/guardians of students in grades K-12 in Jefferson County, Kentucky. The study used the World Café method to collect multiple data points that were triangulated to contribute to the trustworthiness of the study (Glesne, 2006). Epstein’s Model of Overlapping Spheres and Bourdieu’s Theory of Structural Constraints were the theoretical frameworks utilized. To examine the potential for oppression in the systems and structures of this case study and analyze the marginalization of the lived experiences of the participants, a Critical Race Theory lens was used. Examining the counter-narratives of the Black parents/guardians in this study provided insight into the ways in which race and cultural capital may influence a marginalized sense of belonging for Black families at their child’s school. Data were analyzed through three cycles of coding to determine themes and values. Themes that emerged included teacher and school actions that promote, inhibit, and oppress trust, sense of belonging, and representation. Values, attitudes, and beliefs that emerged as important to the participants included authentic relationships grounded in connection and mutual respect, representation in staffing and curricular resources, and cultural understanding and appreciation. When discussing factors that promote a sense of belonging at their child’s school, participants mentioned trusting relationships as a foundational element. Findings indicated that Black families want to be engaged with their child’s school, but traditional, school-centric family engagement structures often inhibit and oppress their sense of belonging, which limits their opportunity to engage in a partnership with the school. Parent recommendations included a need for increased professional development for employees to better understand Black culture. Parents also advocated for schools to stop blaming Black parents and start creating intentional opportunities for Black families to have a presence at the school, with the goal of gathering feedback that administrators will consider when creating systems and structures. The implications of this study suggest that schools must stop expecting families to conform and must begin to disrupt the inequities that exist not only for students but for families as well. Representation matters, and White educators must increase their awareness of the roles of race and culture in educational and societal structures and be willing to confront and dismantle racialized systems.

    Big data and the SP theory of intelligence

    This article is about how the "SP theory of intelligence" and its realisation in the "SP machine" may, with advantage, be applied to the management and analysis of big data. The SP system, introduced in the article and fully described elsewhere, may help to overcome the problem of variety in big data: it has potential as "a universal framework for the representation and processing of diverse kinds of knowledge" (UFK), helping to reduce the diversity of formalisms and formats for knowledge and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central to the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualisation of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it. Comment: Accepted for publication in IEEE Access.
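
    The claim that lossless compression makes data smaller while keeping it exactly recoverable can be illustrated with any off-the-shelf compressor. The snippet below uses Python's zlib on a deliberately redundant toy record purely to show that general point; it is a generic stand-in, not the SP machine's compression mechanism, and the record contents are invented for illustration.

```python
# Generic illustration of lossless compression: redundant data shrinks substantially
# and the original is recovered exactly. Not the SP machine's method.
import zlib

record = b"patient_id=42;visit=2021-03-01;status=ok;" * 1000   # highly redundant toy data
packed = zlib.compress(record, level=9)

print(len(record), "bytes raw ->", len(packed), "bytes compressed")
assert zlib.decompress(packed) == record   # lossless: the original is recovered exactly
```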