
    Engagement and Disengagement in Art Interventions with Memory Impairment

    Introduction. Studies have shown art intervention to be an effective therapy for patients with memory impairments, leading to overall positive behaviors, increased quality of life, and decreased caregiver burden. We conducted a preliminary study to evaluate and compare the effect of participation in weekly art appreciation and painting sessions on the behavior of memory-impaired residents in an assisted living facility. Methods. Residents’ behaviors were observed during art appreciation and active painting sessions over a six-week period. Each session consisted of either viewing and discussing artwork or actively painting in the style of the artist discussed. Positive and negative behaviors were recorded and tallied throughout the sessions. Over the course of 12 sessions, the 7 observers made a total of 1957 observations of a variable patient population. The total number of both positive and negative behaviors was compared between activities, over time within sessions, and over the six weeks. Each session was percent normalized to the time interval with the highest occurrence of select behaviors. Results. Upward trends in positive and negative behaviors were observed in appreciation and painting sessions, respectively. The negative-to-positive engagement ratios for each painting session showed an increase in negative behaviors. Disengagement increased as the appreciation sessions progressed and decreased as painting sessions progressed. Overall, positive engagement increased in both appreciation and painting sessions. Conclusions. Despite several confounding variables encountered in this study, we demonstrated art appreciation and active painting to be a viable non-pharmacological therapeutic approach for individuals with memory impairments.
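    The percent normalization described in the methods, scaling each time interval's tally against the interval with the highest count, can be sketched as follows. This is an illustrative sketch only; the interval length and tallies below are hypothetical, not data from the study.

    ```python
    def percent_normalize(tallies):
        """Scale per-interval behavior tallies so the busiest interval
        reads as 100%, as in the percent normalization described above."""
        peak = max(tallies)
        return [100.0 * t / peak for t in tallies]

    # Hypothetical positive-behavior tallies for four observation intervals
    # within one session:
    print(percent_normalize([6, 12, 9, 3]))  # -> [50.0, 100.0, 75.0, 25.0]
    ```

    Normalizing this way lets sessions with different overall activity levels be compared on a common 0–100 scale.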

    Deep Active Learning for Named Entity Recognition

    Deep learning has yielded state-of-the-art performance on many natural language processing tasks, including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model, consisting of convolutional character and word encoders and a long short-term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than the best-performing models. We carry out incremental active learning during the training process and are able to nearly match state-of-the-art performance with just 25% of the original training data.
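    The core active-learning loop the abstract describes, repeatedly selecting the examples the current model is least sure about for annotation, can be sketched as below. This is a minimal sketch of least-confidence sampling, one common uncertainty criterion; the paper's actual selection strategy and model interface may differ, and `predict_proba` and the toy examples here are hypothetical stand-ins.

    ```python
    def least_confidence(probs):
        """Uncertainty as 1 minus the top class probability."""
        return 1.0 - max(probs)

    def active_learning_round(unlabeled, predict_proba, budget):
        """Pick the `budget` most uncertain unlabeled examples to send
        for labeling before the next retraining round."""
        ranked = sorted(unlabeled,
                        key=lambda x: least_confidence(predict_proba(x)),
                        reverse=True)
        return ranked[:budget]

    # Toy stand-in for a trained tagger's per-example class probabilities:
    toy_probs = {
        "s1": [0.9, 0.1],    # confident
        "s2": [0.55, 0.45],  # uncertain
        "s3": [0.7, 0.3],
    }
    picked = active_learning_round(list(toy_probs), toy_probs.get, budget=1)
    print(picked)  # -> ['s2'] (the lowest-confidence example)
    ```

    Each round, the newly labeled examples are added to the training set and the model is retrained (incrementally, in the paper's setup) before scoring the remaining pool again.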

    The art of active memory


    Attend to You: Personalized Image Captioning with Context Sequence Memory Networks

    We address personalization issues of image captioning, which have not been discussed in previous research. For a query image, we aim to generate a descriptive sentence that accounts for prior knowledge such as the user's active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post-automation tasks, hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting a CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.
    Comment: Accepted paper at CVPR 201
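    The memory mechanism in points (i)–(iii), a repository seeded with user context into which each generated word is appended back, can be illustrated with a deliberately simplified sketch. The class name, the flat-list slot representation, and the example words are all hypothetical simplifications, not the paper's actual tensor-based API.

    ```python
    class ContextMemory:
        """Minimal sketch of the CSMN idea: memory holds context
        entries, and generated words are appended so the decoder
        can attend to long-term history at every step."""

        def __init__(self, user_context):
            self.slots = list(user_context)  # e.g. a user's active vocabulary

        def append_word(self, word):
            self.slots.append(word)          # generated words join the memory

        def window(self, size):
            # The paper's CNN over memory jointly reads nearby ordered
            # slots; here we simply expose the most recent `size` slots.
            return self.slots[-size:]

    mem = ContextMemory(["sunset", "beach"])   # hypothetical user vocabulary
    for word in ["golden", "hour"]:            # hypothetical generated words
        mem.append_word(word)
    print(mem.window(3))  # -> ['beach', 'golden', 'hour']
    ```

    Because past outputs live in an explicitly addressed memory rather than only in a recurrent hidden state, long-range context is available without propagating gradients through every intermediate step.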

    CD32 is expressed on cells with transcriptionally active HIV but does not enrich for HIV DNA in resting T cells

    The persistence of HIV reservoirs, including latently infected, resting CD4+ T cells, is the major obstacle to curing HIV infection. CD32a expression was recently reported to mark CD4+ T cells harboring a replication-competent HIV reservoir during antiretroviral therapy (ART) suppression. We aimed to determine whether CD32 expression marks HIV latently or transcriptionally active infected CD4+ T cells. Using peripheral blood and lymphoid tissue of ART-treated HIV+ or SIV+ subjects, we found that most of the circulating memory CD32+ CD4+ T cells expressed markers of activation, including CD69, HLA-DR, CD25, CD38, and Ki67, and bore a TH2 phenotype as defined by CXCR3, CCR4, and CCR6. CD32 expression did not selectively enrich for HIV- or SIV-infected CD4+ T cells in peripheral blood or lymphoid tissue; isolated CD32+ resting CD4+ T cells accounted for less than 3% of the total HIV DNA in CD4+ T cells. Cell-associated HIV DNA and RNA loads in CD4+ T cells positively correlated with the frequency of CD32+ CD69+ CD4+ T cells but not with CD32 expression on resting CD4+ T cells. Using RNA fluorescence in situ hybridization, CD32 coexpression with HIV RNA or p24 was detected after in vitro HIV infection (peripheral blood mononuclear cells and tissue) and in vivo within lymph node tissue from HIV-infected individuals. Together, these results indicate that CD32 is not a marker of resting CD4+ T cells or of enriched HIV DNA–positive cells after ART; rather, CD32 is predominantly expressed on a subset of activated CD4+ T cells enriched for transcriptionally active HIV after long-term ART.

    Invited Review: Recent developments in vibration control of building and bridge structures

    This paper presents a state-of-the-art review of recent articles published on active, passive, semi-active, and hybrid vibration control systems for structures under dynamic loadings, primarily since 2013. Active control systems include active mass dampers, active tuned mass dampers, distributed mass dampers, and active tendon control. Passive systems include tuned mass dampers (TMD), particle TMD, tuned liquid particle dampers, tuned liquid column dampers (TLCD), eddy-current TMD, tuned mass generators, tuned-inerter dampers, magnetic negative stiffness devices, resetting passive stiffness dampers, re-entering shape memory alloy dampers, viscous wall dampers, viscoelastic dampers, and friction dampers. Semi-active systems include tuned liquid dampers with floating roof, resettable variable stiffness TMD, variable friction dampers, semi-active TMD, magnetorheological dampers, leverage-type stiffness controllable mass dampers, and semi-active friction tendons. Hybrid systems include shape memory alloy–liquid column dampers, shape memory alloy-based dampers, and TMD–high damping rubber.

    Re-Pair Compression of Inverted Lists

    Compression of inverted lists with methods that support fast intersection operations is an active research topic. Most compression schemes rely on encoding differences between consecutive positions with techniques that favor small numbers. In this paper we explore a completely different alternative: we use Re-Pair compression of those differences. While Re-Pair by itself offers fast decompression at arbitrary positions in main and secondary memory, we introduce variants that in addition speed up the operations required for inverted list intersection. We compare the resulting data structures with several recent proposals under various list intersection algorithms, concluding that our Re-Pair variants offer an interesting time/space tradeoff for this problem, yet further improvements are required to surpass the state of the art.
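    The two ingredients the abstract combines, gap (difference) encoding of an inverted list and Re-Pair's pair replacement, can be sketched as below. This is a minimal illustration under simplifying assumptions: real Re-Pair iterates until no adjacent pair repeats and builds a full grammar, whereas here a single replacement round is shown, and the postings list is hypothetical.

    ```python
    from collections import Counter

    def gaps(postings):
        """Encode a sorted inverted list as differences (d-gaps)."""
        return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

    def repair_step(seq, new_symbol):
        """One Re-Pair round: replace the most frequent adjacent pair
        with a fresh symbol, returning the new sequence and the rule."""
        pair_counts = Counter(zip(seq, seq[1:]))
        (a, b), count = pair_counts.most_common(1)[0]
        if count < 2:
            return seq, None  # nothing worth replacing
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(new_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        return out, (new_symbol, (a, b))

    deltas = gaps([3, 4, 7, 8, 11, 12])          # -> [3, 1, 3, 1, 3, 1]
    compressed, rule = repair_step(deltas, "R0")
    print(compressed, rule)  # -> ['R0', 'R0', 'R0'] ('R0', (3, 1))
    ```

    Regular gap patterns, common in real inverted lists, yield highly repetitive difference sequences, which is exactly the repetitiveness Re-Pair's grammar rules exploit.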