    A fast and accurate energy source emulator for wireless sensor networks

    The capability either to minimize energy consumption in battery-operated devices or to adequately exploit energy harvesting from various ambient sources is central to the development and engineering of energy-neutral wireless sensor networks. However, the design of effective networked embedded systems targeting unlimited lifetime poses several challenges at different architectural levels. In particular, the heterogeneity, variability, and unpredictability of many energy sources, combined with changes in the energy required by powered devices, make it difficult to obtain reproducible testing conditions, prompting the need for novel solutions that address these issues. This paper introduces a novel embedded hardware-software solution aimed at emulating a wide spectrum of the energy sources usually exploited to power sensor network motes. The proposed system consists of a modular architecture featuring a small form factor, low power requirements, and limited cost. An extensive experimental characterization confirms the validity of the embedded emulator in terms of flexibility, accuracy, and latency, while a case study on the emulation of a lithium battery shows that the hardware-software platform does not introduce any measurable reduction in the accuracy of the model. The presented system is therefore a convenient tool for testing large-scale wireless sensor network testbeds under realistic energy supply scenarios.
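    To make the lithium battery case study concrete, the following is a minimal, hypothetical sketch of the kind of software battery model such an emulator might evaluate in real time. The abstract does not disclose the actual model or parameters; the linear open-circuit-voltage curve, internal resistance, and cell capacity below are illustrative assumptions only.

    ```python
    def terminal_voltage(soc, load_current, v_full=4.2, v_empty=3.0, r_internal=0.05):
        """Terminal voltage of a crude lithium cell model (illustrative only).

        soc            -- state of charge in [0, 1]
        load_current   -- discharge current in amperes
        v_full/v_empty -- assumed open-circuit voltages at SOC 1.0 / 0.0
        r_internal     -- assumed internal resistance in ohms
        """
        ocv = v_empty + (v_full - v_empty) * soc   # linear OCV curve (a simplification)
        return ocv - r_internal * load_current     # ohmic drop under load

    def step_soc(soc, load_current, dt_s, capacity_ah=2.0):
        """Coulomb-counting SOC update over a dt_s-second step."""
        drawn_ah = load_current * dt_s / 3600.0
        return max(0.0, soc - drawn_ah / capacity_ah)

    # Emulation loop: one hour of a constant 1 A load in 1-second steps.
    soc = 1.0
    for _ in range(3600):
        voltage = terminal_voltage(soc, 1.0)
        soc = step_soc(soc, 1.0, 1.0)
    ```

    In a hardware emulator, each computed voltage would drive a programmable power stage; here the loop only illustrates the model-update cadence.
    
    
    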

    Small transformers for Bioinformatics tasks

    Recent trends in bioinformatics are trying to align the field with more modern approaches based on statistical natural language processing and deep learning; however, state-of-the-art neural natural language processing techniques remain relatively unexplored in this domain. Large models can achieve state-of-the-art (SOTA) performance, but a typical bioinformatics lab has limited hardware resources. For this reason, this thesis focuses on small architectures whose training can be performed in a reasonable amount of time, while trying to limit or even negate the performance loss compared to SOTA. In particular, sparse attention mechanisms (such as the one proposed by Longformer) and parameter sharing techniques (such as the one proposed by ALBERT) are jointly explored with respect to two genetic languages: the human genome and the eukaryotic mitochondrial genomes of 2000+ different species. Contextual embeddings for each token are learned via pretraining on a language understanding task, in both RoBERTa and ALBERT styles, to highlight differences in performance and training efficiency. The learned contextual embeddings are finally exploited for fine-tuning on one localization task (transcription start site in human promoters) and two sequence classification tasks (12S metagenomics in fishes and chromatin profile prediction, single-class and multi-class respectively). Using smaller architectures, near-SOTA performance is achieved on all the tasks already explored in the literature, and a new SOTA has been established for the other tasks. Further experiments with larger architectures consistently improved the previous SOTA for every task.
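    The two ideas the abstract combines can be sketched in a few lines. This is not the thesis code: the window size, sequence length, and the stand-in "layer" below are assumptions for illustration. The first function builds a Longformer-style sliding-window attention mask, which is what makes attention cost linear in sequence length; the second shows the essence of ALBERT-style parameter sharing, where one set of layer weights is applied repeatedly instead of stacking distinct layers.

    ```python
    def sliding_window_mask(seq_len, window):
        """Boolean mask: mask[i][j] is True iff token i may attend to token j.

        Each token sees `window` neighbours on each side, so attention cost
        grows as O(seq_len * window) instead of O(seq_len ** 2).
        """
        return [[abs(i - j) <= window for j in range(seq_len)]
                for i in range(seq_len)]

    mask = sliding_window_mask(seq_len=8, window=2)
    # Token 0 attends only to positions 0..2:
    # mask[0] == [True, True, True, False, False, False, False, False]

    def apply_shared_layer(x, layer, num_layers):
        """ALBERT-style sharing: one parameter set reused across all depths.

        `layer` stands in for a transformer block; only one copy of its
        parameters exists regardless of num_layers.
        """
        for _ in range(num_layers):
            x = layer(x)
        return x
    ```

    In a real model the mask would gate the attention-score matrix, and `layer` would be a full transformer block; the point is that memory for weights stays constant as depth grows.
    
    
    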