
    Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence

    Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but it conventionally assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed "scalogram-signal correspondence learning," based on the wavelet transform, to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary viewpoint (i.e., a scalogram generated with a wavelet transform) align with each other, by optimizing a contrastive objective. We extensively assess the quality of features learned with our multi-view strategy on diverse public datasets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, as well as their usefulness in the low-data regime, transfer learning, and cross-validation. Our methodology achieves competitive performance with fully supervised networks, and it outperforms pre-training with autoencoders in both central and federated contexts. Notably, it improves generalization in a semi-supervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
    Comment: Accepted for publication at IEEE Internet of Things Journal.
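The correspondence task in this abstract can be illustrated with a minimal contrastive objective over paired signal/scalogram embeddings. The sketch below is illustrative only: the encoders are stand-in random linear projections rather than the paper's deep temporal networks, the batch is synthetic, and the temperature value is an assumption, not a reported hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Stand-in encoder: project and L2-normalize (real models are deep temporal nets)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy batch: 4 raw signals and their "scalogram" views (random stand-ins here).
signals = rng.normal(size=(4, 16))
scalograms = rng.normal(size=(4, 32))

W_sig = rng.normal(size=(16, 8))
W_scal = rng.normal(size=(32, 8))

z_sig = embed(signals, W_sig)
z_scal = embed(scalograms, W_scal)

# Contrastive (InfoNCE-style) objective: each signal should align with its
# own scalogram (diagonal entries) and repel the others (off-diagonal).
tau = 0.1  # temperature, illustrative value
logits = (z_sig @ z_scal.T) / tau
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls matching signal/scalogram pairs together and pushes mismatched pairs apart, which is the alignment criterion the auxiliary task optimizes.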

    Handwritten Digit Recognition by Fourier-Packet Descriptors

    Any statistical pattern recognition system includes a feature extraction component. For character patterns, several feature families have been tested, such as the Fourier-Wavelet Descriptors. We propose here a generalization of this family: the Fourier-Packet Descriptors. We selected sets of these features and tested them on handwritten digits: the error rate was 1.55% with a polynomial classifier for a 70-feature set, and 1.97% with a discriminative learning quadratic discriminant function for a 40-feature set.
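The descriptor family above builds on classical Fourier descriptors of a character contour. A minimal sketch of the base construction (not the packet generalization proposed in the paper) might look like this, assuming a closed 2-D contour sampled as an array of (x, y) points:

```python
import numpy as np

def fourier_descriptors(contour, n_desc=10):
    """Classic Fourier descriptors of a closed 2-D contour.

    Contour points are treated as complex numbers x + iy. Dropping the DC
    term gives translation invariance; dividing by the magnitude of the
    first harmonic gives scale invariance.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:n_desc + 1]              # drop DC term
    return np.abs(coeffs) / np.abs(coeffs[0])  # scale-normalized magnitudes

# Toy contour: a circle sampled at 64 points; its energy sits entirely in
# the first harmonic, so the descriptor vector is [1, ~0, ~0, ...].
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
desc = fourier_descriptors(circle)
```

Because the descriptors ignore the DC term and normalize by the first harmonic, they are unchanged when the digit is shifted or rescaled, which is the property that makes this family attractive for size-varying handwriting.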

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Machine Learning-based Brokers for Real-time Classification of the LSST Alert Stream

    The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its followup paradigm. Alert-brokers -- automated software systems to sift through, characterize, annotate, and prioritize events for followup -- will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources using only the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization; the second, a multi-class typing of the combined variable and transient dataset; and the third, a purity-driven subtyping of a transient class. While several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress towards adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.
    Comment: 33 pages, 14 figures, submitted to ApJ.
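An early triage stage of the kind described above (variable versus transient categorization on sparse photometry) typically consumes simple summary statistics of each light curve. The sketch below is a hedged illustration, not ANTARES internals: the feature names are hypothetical, and the reduced chi-squared against a constant model is a common variability test in time-domain astronomy, chosen here as an assumption.

```python
import numpy as np

def lightcurve_features(times, mags, errs):
    """Summary statistics robust to sparse, uneven, heteroskedastic sampling."""
    amplitude = np.ptp(mags)  # peak-to-peak magnitude range
    # Reduced chi^2 against a constant (non-variable) model; values near 1
    # indicate scatter consistent with the photometric errors alone.
    mean = np.average(mags, weights=1.0 / errs**2)
    chi2_dof = np.sum(((mags - mean) / errs) ** 2) / (len(mags) - 1)
    span = times.max() - times.min()  # observing baseline
    return {"amplitude": amplitude, "chi2_dof": chi2_dof, "span": span}

# Toy unevenly sampled light curve: a constant source with heteroskedastic noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 30))
errs = rng.uniform(0.01, 0.1, 30)
mags = 18.0 + rng.normal(0.0, errs)

feats = lightcurve_features(t, mags, errs)
```

Feature vectors like this one can then be fed to any multi-class classifier in the later pipeline stages; the point of the sketch is only that the inputs survive uneven cadence and per-point error bars.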

    Affective Recommendation of Movies Based on Selected Connotative Features

    The apparent difficulty in assessing emotions elicited by movies, and the undeniably high variability in subjects' emotional responses to filmic content, have recently been tackled by exploring film connotative properties: the set of shooting and editing conventions that help in transmitting meaning to the audience. Connotation provides an intermediate representation which exploits the objectivity of audiovisual descriptors to predict the subjective emotional reaction of single users. This is done without the need of registering users' physiological signals or of employing other people's highly variable emotional ratings, relying instead on the inter-subjectivity of connotative concepts and on the knowledge of users' reactions to similar stimuli. This work extends previous efforts by extracting audiovisual and film grammar descriptors and, driven by users' ratings of connotative properties, creates a shared framework where movie scenes are placed, compared, and recommended according to connotation. We evaluate the potential of the proposed system by asking users to assess the ability of connotation to suggest filmic content that targets their affective requests.

    Combining DNA Methylation with Deep Learning Improves Sensitivity and Accuracy of Eukaryotic Genome Annotation

    Thesis (Ph.D.) - Indiana University, School of Informatics, Computing, and Engineering, 2020.
    The genome assembly process has significantly decreased in computational complexity since the advent of third-generation long-read technologies. However, genome annotations still require significant manual effort from scientists to produce the trustworthy annotations required for most bioinformatic analyses. Current methods for automatic eukaryotic annotation rely on sequence homology, structure, or repeat detection, and each method requires a separate tool, making the workflow for a final product a complex ensemble. Beyond the nucleotide sequence, one important component of genetic architecture is the presence of epigenetic marks, including DNA methylation. However, no automatic annotation tools currently use this valuable information. As methylation data becomes more widely available from nanopore sequencing technology, tools that take advantage of patterns in this data will be in demand. The goal of this dissertation was to improve the annotation process by developing and training a recurrent neural network (RNN) on trusted annotations to recognize multiple classes of elements from both the reference sequence and DNA methylation. We found that our proposed tool, RNNotate, detected fewer coding elements than GlimmerHMM and Augustus, but those predictions were more often correct. When predicting transposable elements, RNNotate was more accurate than both RepeatMasker and RepeatScout. Additionally, we found that RNNotate was significantly less sensitive when trained and run without DNA methylation, validating our hypothesis. To the best of our knowledge, we are not only the first group to use recurrent neural networks for eukaryotic genome annotation, but we also innovated in the data space by utilizing DNA methylation patterns for prediction.

    Feature Extraction Methods for Character Recognition
