5,612 research outputs found

    New 1-systems of Q(6,q), q even

    On 1-systems of Q(6,q), q even

    m-systems of polar spaces and SPG reguli

    It will be shown that every m-system of W_{2n+1}(q), Q^-(2n+1, q) or H(2n, q^2) is an SPG regulus and hence gives rise to a semipartial geometry. We also briefly investigate the semipartial geometries associated with the known m-systems of these polar spaces.
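
    As background for the terminology in this abstract, the following is a brief sketch of the standard definitions (after Shult and Thas for m-systems and Thas for SPG reguli); it is not part of the original record, and exact parameter values are deliberately omitted.

        % Partial m-system: a set of totally singular m-spaces of a finite
        % classical polar space P such that no generator of P containing one
        % of them meets any of the others; an m-system is a partial m-system
        % of maximum size.
        \[
          \mathcal{M}=\{\pi_1,\dots,\pi_k\}:\quad
          G \text{ a generator of } \mathcal{P},\; G \supseteq \pi_i
          \;\Longrightarrow\; G \cap \pi_j = \emptyset \quad (j \neq i).
        \]
        % SPG regulus: a set of pairwise disjoint m-spaces of PG(n,q) such that
        % every (m+1)-space through an element meets either 0 or a constant
        % number alpha > 0 of the other elements in a point, together with a
        % regularity condition on the points lying on no element; embedding
        % PG(n,q) as a hyperplane of PG(n+1,q) and taking the (m+1)-spaces
        % through elements of the regulus as lines yields a semipartial geometry.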

    On regular hyperbolic fibrations

    Personal identity processes from adolescence through the late 20s: age trends, functionality, and depressive symptoms

    Personal identity formation constitutes a crucial developmental task during the teens and 20s. Using a recently developed five-dimensional identity model, this cross-sectional study (N = 5834) investigated age trends from ages 14 to 30 for different commitment and exploration processes. As expected, results indicated that, despite some fluctuations over time, commitment processes tended to increase in a linear fashion. Exploration in breadth and exploration in depth were characterized by quadratic trends, with the highest levels occurring in emerging adulthood. Further, the functionality of these identity processes, and especially of exploration, changed over time. Exploration in breadth and exploration in depth were strongly related to commitment processes, especially in adolescence and emerging adulthood, but these exploration processes became increasingly associated with ruminative exploration and depressive symptoms in the late 20s. Theoretical implications and suggestions for future research are outlined.

    Unsupervised patient representations from clinical notes with interpretable classification decisions

    We have two main contributions in this work. (1) We explore the use of a stacked denoising autoencoder and a paragraph vector model to learn task-independent, dense patient representations directly from clinical notes. We evaluate these representations by using them as features in multiple supervised setups and compare their performance with that of sparse representations. (2) To understand and interpret the representations, we explore the best-encoded features within the patient representations obtained from the autoencoder model. Further, we calculate the significance of the input features of the trained classifiers when we use these pretrained representations as input. Comment: Accepted poster at NIPS 2017 Workshop on Machine Learning for Health (https://ml4health.github.io/2017/).
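
    As a rough illustration of the paragraph-vector half of this pipeline (this is not the authors' code; the notes, labels, hyperparameters and classifier below are placeholder assumptions), dense note representations can be learned with gensim's Doc2Vec and reused as features in a supervised setup:

        # Minimal sketch, assuming gensim and scikit-learn are available.
        # The clinical notes, labels and hyperparameters are illustrative only.
        from gensim.models.doc2vec import Doc2Vec, TaggedDocument
        from sklearn.linear_model import LogisticRegression

        notes = [
            "patient admitted with chest pain and shortness of breath",
            "routine follow up visit for diabetes management",
        ]                        # placeholder clinical notes
        labels = [1, 0]          # placeholder task labels (e.g. a diagnosis flag)

        # Tag each tokenised note with an id so Doc2Vec learns one vector per note.
        corpus = [TaggedDocument(words=note.split(), tags=[str(i)])
                  for i, note in enumerate(notes)]

        # Learn task-independent dense representations from the notes alone.
        model = Doc2Vec(vector_size=300, min_count=1, epochs=40)
        model.build_vocab(corpus)
        model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

        # Reuse the pretrained representations as features for a classifier.
        X = [model.infer_vector(doc.words) for doc in corpus]
        clf = LogisticRegression(max_iter=1000).fit(X, labels)

    The stacked denoising autoencoder branch described in the abstract would replace Doc2Vec here with an encoder trained to reconstruct corrupted bag-of-words note vectors; the downstream classification step stays the same.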