
    Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data

    Wrist-worn wearable devices equipped with heart activity sensors can provide valuable data for preventive health. However, heart activity analysis from these devices suffers from noise introduced by motion artifacts. Methods traditionally used to remove outliers based on motion data can lead to discarding clean data when some movement was present, and to accepting noisy data when, for example, the subject was still but the sensor was misplaced. This work shows that self-organizing maps (SOMs) can be used to effectively accept or reject sections of heart data collected from unreliable devices, such as wrist-worn devices. In particular, the proposed SOM-based filter accepts a larger number of measurements (fewer false negatives) with a higher overall quality than methods based solely on statistical analysis of motion data. We provide an empirical analysis on real-world wearable data, comprising heart and motion data of users. We show how topographic mapping can help identify and interpret patterns in the sensor data and relate them to an assessment of the user's state. More importantly, our experimental results show that the proposed approach retains almost twice the amount of data while keeping samples with an error that is an order of magnitude lower than that of a filter based on accelerometric data.
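    The filtering idea lends itself to a compact illustration. Below is a minimal sketch, assuming the third-party minisom package: windows of sensor features are mapped onto a small SOM, each map node is scored by the mean quality of the windows it attracts, and a new window is accepted only if its best-matching unit lies in a high-quality region of the map. The features, quality scores, and threshold here are synthetic placeholders, not the paper's actual setup.

```python
# Hedged sketch of SOM-based quality filtering; assumes the `minisom` package.
# Features, quality scores, and the acceptance threshold are illustrative.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)

# Toy stand-in for per-window features extracted from wrist-worn sensor data
# (e.g., heart-rate statistics plus accelerometer statistics per window).
features = rng.normal(size=(500, 4))

# Train a small SOM so every window maps to a best-matching unit (BMU).
som = MiniSom(8, 8, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(features, num_iteration=1000)

# Score each SOM node by the mean quality of the windows mapped to it.
# Here quality is synthetic; in practice it could come from a reference device.
quality = rng.uniform(size=len(features))
node_sum = np.zeros((8, 8))
node_cnt = np.zeros((8, 8))
for x, q in zip(features, quality):
    i, j = som.winner(x)
    node_sum[i, j] += q
    node_cnt[i, j] += 1
node_quality = np.divide(node_sum, node_cnt,
                         out=np.zeros_like(node_sum), where=node_cnt > 0)

def accept(window, threshold=0.5):
    """Accept a window if its BMU lies in a high-quality region of the map."""
    i, j = som.winner(window)
    return node_quality[i, j] >= threshold

kept = [w for w in features if accept(w)]
print(f"kept {len(kept)} of {len(features)} windows")
```

    Unlike a hard accelerometer cutoff, a decision based on the map region can keep clean windows recorded during movement and reject still-but-misplaced ones, which is the behavior the abstract describes.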

    Benchmarking automatic prompt learning methods in the Italian language

    In recent years the NLP field has witnessed two major changes: first, the development of large language models, which gave rise to the pre-training/fine-tuning paradigm, and then the diffusion of prompting methods that allow using language models (LMs) without updating their large number of parameters. In prompting, the model parameters are left unchanged and only an instruction in natural language, called a prompt, is optimized. The main difficulty of this method is choosing the right prompt to extract knowledge from an LM. To address this issue, several algorithms have been proposed that automatically search for the best-performing prompt. Moreover, some algorithms directly search the embedding space of an LM using "soft prompts" composed of virtual tokens that do not correspond to natural language words. This work explores the potential of soft prompting on tasks in the Italian language. In particular, two popular algorithms, namely P-tuning and Prefix tuning, are applied to 10 different classification tasks selected from the EVALITA 2023 evaluation campaign. Experimental results using these prompting techniques in combination with two LMs pre-trained on Italian (BERTino and IT-5) show that soft prompts are also beneficial for solving tasks in non-English languages such as Italian, and that soft prompting allows training models that require little or no task-specific tuning. In particular, Prefix tuning combined with IT-5 achieves good performance without any hyperparameter optimization, even in low-data settings.
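    As a concrete illustration of soft prompting, below is a minimal sketch using prefix tuning from the HuggingFace peft library, which trains only a small set of virtual prefix tokens while the LM weights stay frozen. The checkpoint name, label count, and hyperparameters are placeholder assumptions, not the paper's exact BERTino/IT-5 configuration.

```python
# Hedged sketch of prefix tuning for classification; assumes `transformers`
# and `peft`. The Italian checkpoint and settings are illustrative placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "dbmdz/bert-base-italian-uncased"  # assumed Italian checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Only the virtual prefix tokens are trained; the LM weights stay frozen.
peft_config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # prints the tiny trainable fraction

# From here, train with a standard loop or transformers.Trainer on the
# task-specific dataset (e.g., one of the EVALITA classification tasks).
```

    Because only the virtual tokens are updated, a separate lightweight prefix can be trained per task while one frozen backbone is shared, which is what makes the approach attractive when little task-specific data is available.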