
    Solving Challenges in Deep Unsupervised Methods for Anomaly Detection

    Anomaly Detection (AD) is the task of identifying samples that differ in some way from the training observations. Samples that do not follow the distribution of normal data are called outliers or anomalies. In this thesis, we examine two challenges of deep learning-based anomaly detection methods. The first is generalizability to outliers. A wide range of unsupervised anomaly detection methods use deep autoencoders as a foundation; however, a notable limitation of deep autoencoders is that they generalize to outliers and reconstruct them with low error. To overcome this issue, we propose an adversarial framework consisting of two competing components: an adversarial distorter and an autoencoder. During training, the adversarial distorter produces perturbations that are applied to the encoder's latent space to maximize the reconstruction error, while the autoencoder attempts to neutralize the effect of these perturbations to minimize it. The second challenge is the high computational cost, complexity, and unstable training procedures of deep anomaly detection methods. Despite their success at anomaly detection, deep neural networks are difficult to deploy in real-world applications for this reason. We address this problem with a simple learning procedure that trains a lightweight convolutional neural network, casting anomaly detection as a supervised regression problem: we label normal and anomalous data using two separable distributions of continuous values. To compensate for the lack of anomalous samples during training, we use straightforward image augmentation techniques to create a distinct set of anomalous samples. The augmented set has a distribution similar to that of normal data but deviating slightly from it, whereas real anomalies are expected to lie further away. Consequently, training a regressor on normal and augmented samples yields more distinct label distributions for normal and real anomalous data points. On several image and video anomaly detection benchmarks, our methods outperform cutting-edge approaches.
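The regression formulation described above can be sketched in a few lines. The target means, spreads, and decision threshold below are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeling scheme: normal images get continuous regression
# targets drawn around 0.0, augmented pseudo-anomalies get targets drawn
# around 1.0, so the two label distributions are separable.
normal_targets = rng.normal(loc=0.0, scale=0.1, size=1000)
augmented_targets = rng.normal(loc=1.0, scale=0.1, size=1000)

# A lightweight CNN regressor would be trained to predict these targets.
# At test time its prediction serves directly as the anomaly score: real
# anomalies deviate further from normal data than the augmentations do,
# so their predicted values land deeper in the anomalous label region.
THRESHOLD = 0.5  # any point between the two separable distributions

def is_anomalous(predicted_score: float) -> bool:
    return predicted_score > THRESHOLD

print(is_anomalous(0.92), is_anomalous(0.04))  # True False
```

The separability of the two target distributions is what makes the regressor's raw output usable as a score without any post-hoc calibration.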

    Estimating general motion and intensity from event cameras

    Robotic vision algorithms have become widely used in many consumer products, enabling technologies such as autonomous vehicles, drones, and augmented reality (AR) and virtual reality (VR) devices, to name a few. These applications require vision algorithms that work in real-world environments with extreme lighting variations and fast-moving objects. However, robotic vision applications often rely on standard video cameras, which face severe limitations in fast-moving scenes or near bright light sources that diminish image quality with artefacts like motion blur or over-saturation. To address these limitations, the body of work presented here investigates alternative sensor devices that mimic the superior perception properties of human vision. Such silicon retinas were proposed by neuromorphic engineering, and we focus here on one biologically inspired sensor called the event camera, which offers a new camera paradigm for real-time robotic vision. The camera provides a high measurement rate, low latency, high dynamic range, and a low data rate. Its signal is composed of a stream of asynchronous events at microsecond resolution, each indicating when an individual pixel registers a logarithmic intensity change of a pre-set threshold size. Using this novel signal has proven very challenging in most computer vision problems, since common vision methods require synchronous absolute intensity information. In this thesis, we present for the first time a method to reconstruct an image and estimate motion from an event stream without additional sensing or prior knowledge of the scene. This method is based on coupled estimation of both motion and intensity, which enables our event-based analysis, previously possible only with severe limitations.
We also present the first machine learning algorithm for event-based unsupervised intensity reconstruction which does not depend on an explicit motion estimate and reveals finer image details. This learning approach does not rely on event-to-image examples but learns from standard camera images that are not coupled to the event data. In experiments we show that the learned reconstruction improves upon our handcrafted approach. Finally, we combine our learned approach with motion estimation methods and show that the improved intensity reconstruction also significantly improves the motion estimation results. We hope that our work in this thesis bridges the gap between the event signal and images, and that it opens event cameras to practical solutions that overcome the current limitations of frame-based cameras in robotic vision.
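The event-generation model described above can be illustrated with a minimal per-pixel simulation; the threshold value and input signal are made up for illustration, and real event cameras operate asynchronously in continuous time rather than over a sampled array:

```python
import numpy as np

def events_from_intensity(log_I, threshold=0.2):
    """Simulate the per-pixel event-generation model of an event camera:
    an event (timestep, polarity) fires each time the log intensity moves
    a full threshold away from the level at the last event."""
    events = []
    ref = log_I[0]  # reference level at the last fired event
    for t, x in enumerate(log_I[1:], start=1):
        while x - ref >= threshold:   # brightness increased -> ON event
            ref += threshold
            events.append((t, +1))
        while ref - x >= threshold:   # brightness decreased -> OFF event
            ref -= threshold
            events.append((t, -1))
    return events

# A pixel whose log intensity ramps up and then falls back:
signal = np.array([0.0, 0.25, 0.5, 0.3, 0.05])
print(events_from_intensity(signal))  # [(1, 1), (2, 1), (4, -1)]
```

Note that the absolute intensity cannot be read off a single event; only accumulated changes relative to an unknown starting level are observed, which is exactly why joint intensity and motion estimation is needed.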

    Natural stimuli for mice: environment statistics and behavioral responses


    Unsupervised Pretraining of Neural Networks with Multiple Targets using Siamese Architectures

    A model's response to a given input pattern depends on the patterns seen in the training data. The larger the amount of training data, the more likely edge cases are covered during training. However, the more complex the input patterns are, the larger the model has to be. For very simple use cases, a relatively small model can achieve very high test accuracy in a matter of minutes; a large model, on the other hand, has to be trained for multiple days. The actual time to develop a model of that size is even greater, since many different architecture types and hyper-parameter configurations typically have to be tried. An extreme case of a large model is the recently released GPT-3. This model consists of 175 billion parameters and was trained on 45 terabytes of text data. It was trained to generate text and is able to write news articles and source code from only a rough description. However, a model like this can only be created by researchers with access to special hardware or immense amounts of data. It is therefore desirable to find less resource-intensive training approaches that enable other researchers to create well-performing models. This thesis investigates the use of pre-trained models. If a model has been trained on one dataset and is then trained on similar data, it learns to adjust to similar patterns faster than a model that has not yet seen any of the task's patterns; the lessons learned from one training are thus transferred to another task. During pre-training, the model is trained to solve a specific task, such as predicting the next word in a sequence or first encoding an input image before decoding it. Such models contain an encoder and a decoder part. When transferring such a model to another task, some of its layers are removed.
As a result, having to discard fewer weights leads to faster training, since less time is spent training parts of a model that are only needed to solve an auxiliary task. Throughout this thesis, the concept of siamese architectures is discussed, since with that architecture no parameters have to be discarded when transferring a model trained this way to another task. The siamese pre-training approach thus reduces the need for resources like time and energy, and drives the development of new models in the direction of Green AI. The models trained with this approach are evaluated by comparing them to models trained with other pre-training approaches as well as to large existing models. It is shown that the models trained for the tasks in this thesis perform as well as externally pre-trained models, given the right choice of data and training targets, and that the number and type of training targets during pre-training affect a model's performance on transfer learning tasks. The use cases presented in this thesis cover data from different domains to show that the siamese training approach is widely applicable. Consequently, researchers are encouraged to create their own pre-trained models for data domains for which no pre-trained models yet exist.

    A model's prediction depends on which patterns are present in the data used during training. The larger the amount of training data, the more likely it is that edge cases occur in the data. However, the larger the number of patterns to be learned, the larger the model has to be. For simple use cases it is possible to train a small model in a few minutes and already obtain good results on test data. For complex use cases, a correspondingly large model may need up to several days to become sufficiently good. An extreme case of a large model is the recently released model named GPT-3, which consists of 175 billion parameters and was trained with training data on the order of 45 terabytes. The model was trained to generate text and is able to produce news articles based on a rough initial description. Only researchers with access to the corresponding hardware and amounts of data can develop such a model. It is therefore of interest to improve training procedures so that models for complex use cases can also be trained with few available resources. This thesis deals with the pre-training of neural networks. If a neural network has been trained on one dataset and is then trained further on a second dataset, it learns the features of the second dataset faster, because it does not have to learn patterns from scratch but can draw on what it has already learned; the knowledge is said to be transferred. During pre-training, a model is often given a task such as, in the case of image data, first compressing the training data and then reconstructing it. For text data, a model could be pre-trained by receiving a sentence as input and having to predict the next sentence of the source document. Such models accordingly consist of an encoder and a decoder. The drawback of this approach is that the decoder is needed only for pre-training, while only the encoder is needed for the later use case. A central part of this thesis is therefore an examination of the advantages and disadvantages of the siamese model architecture. This architecture consists only of an encoder, which makes pre-training cheaper because fewer weights have to be trained. The main scientific contribution is an extensive comparison of the siamese architecture with comparable approaches. Certain drawbacks are identified, for example that the choice of similarity function or the composition of the training data has a large effect on model training. The thesis works out which similarity function is recommended in which contexts, and how other drawbacks of the siamese architecture can be compensated by adapting the training targets. The corresponding experiments are run on data from different domains to show that the approach is universally applicable. The results from concrete use cases also show that the models developed in this thesis perform as well as externally available models that were trained at great resource cost. This demonstrates that carefully designed architectures can reduce the required resources.
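The core siamese idea can be sketched in a few lines of numpy: one shared encoder processes both inputs and there is no decoder to throw away. The layer sizes, the random untrained weights, and the cosine objective are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared weight matrix used by both branches: the "siamese" part.
# There are no decoder weights, so nothing is discarded at transfer time.
W = rng.normal(size=(8, 4)) * 0.1

def encode(x):
    """Shared encoder applied identically to every input."""
    return np.tanh(x @ W)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.normal(size=8)
x_aug = x + rng.normal(scale=0.01, size=8)   # slightly perturbed "view" of x
x_other = rng.normal(size=8)                  # an unrelated sample

sim_pos = cosine_similarity(encode(x), encode(x_aug))
sim_neg = cosine_similarity(encode(x), encode(x_other))
# A contrastive training target would push sim_pos toward 1 while keeping
# sim_neg low; here the positive pair is already closer because the
# perturbation is tiny.
```

The choice of similarity function (cosine here) is exactly one of the design decisions the thesis identifies as having a large effect on training.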

    Emotional body language synthesis for humanoid robots

    Some of the chapters of this thesis are based on research published by the author. Chapter 4 is based on Marmpena, M., Lim, A., and Dahl, T. S. (2018). How does the robot feel? Perception of valence and arousal in emotional body language. Paladyn, Journal of Behavioral Robotics, 9(1), 168-182. DOI: https://doi.org/10.1515/pjbr-2018-0012. Chapter 6 is based on Marmpena, M., Lim, A., Dahl, T. S., and Hemion, N. (2019). Generating robotic emotional body language with Variational Autoencoders. In Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 545–551. DOI: 10.1109/ACII.2019.8925459. Chapter 7 extends Marmpena, M., Garcia, F., and Lim, A. (2020). Generating robotic emotional body language of targeted valence and arousal with Conditional Variational Autoencoders. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI '20, pages 357–359. DOI: https://doi.org/10.1145/3371382.3378360. The designed and generated robotic emotional body language expression data presented in this thesis are publicly available at https://github.com/minamar/rebl-pepper-data

    In the next decade, societies will witness a rise in service robots deployed in social environments such as schools, homes, or shops, where they will operate as assistants, public relations agents, or companions. People are expected to willingly engage and collaborate with these robots to accomplish positive outcomes. To facilitate collaboration, robots need to comply with the behavioural and social norms that humans use in their daily interactions. One such behavioural norm is the expression of emotion through body language. Previous work on emotional body language synthesis for humanoid robots has mainly focused on hand-coded design methods, often employing features extracted from human body language. However, hand-coded design is cumbersome and results in a limited number of expressions with low variability.
This limitation can come at the expense of user engagement, since the robotic behaviours will appear repetitive and predictable, especially in long-term interaction. Furthermore, design approaches strictly based on human emotional body language might not transfer effectively to robots because of their simpler morphology. Finally, most previous work uses six or fewer basic emotion categories in the design and evaluation of emotional expressions, which can result in a lossy compression of the granularity of emotion expression. This thesis presents a methodology for developing a complete framework of emotional body language generation for a humanoid robot, intending to address these three limitations. Our starting point is a small set of animations designed by professional animators with the robot's morphology in mind. We conducted an initial user study to acquire reliable dimensional labels of valence and arousal for each animation. Next, we used the motion sequences from these animations to train a Variational Autoencoder, a deep learning model, to generate numerous new animations in an unsupervised setting. Finally, we extended the model to condition the generative process on valence and arousal attributes, and we conducted a user study to evaluate the interpretability of the animations in terms of valence, arousal, and dominance. The results indicate moderate to strong interpretability.
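The conditioning step described above (a Conditional Variational Autoencoder decoding a latent sample together with target valence and arousal) can be sketched as follows. The dimensions and the random, untrained decoder weights are illustrative assumptions; a trained CVAE would learn these weights from the labeled animations:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, cond_dim, motion_dim = 4, 2, 10  # made-up sizes for illustration

# Hypothetical decoder weights; in a real CVAE these are learned.
W_dec = rng.normal(size=(latent_dim + cond_dim, motion_dim)) * 0.1

def decode(z, valence, arousal):
    """Conditional generation: the latent sample is concatenated with the
    target (valence, arousal) attributes before being decoded into one
    motion frame (joint positions for the robot)."""
    zc = np.concatenate([z, [valence, arousal]])
    return np.tanh(zc @ W_dec)

z = rng.normal(size=latent_dim)               # sample from the prior
frame = decode(z, valence=0.8, arousal=-0.3)  # target a "positive, calm" pose
print(frame.shape)  # (10,)
```

Because the attributes enter the decoder explicitly, sampling different `z` values at a fixed (valence, arousal) pair yields varied animations of the same intended emotional tone, which is what addresses the low-variability limitation of hand-coded design.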

    DeWave: Discrete EEG Waves Encoding for Brain Dynamics to Text Translation

    The translation of brain dynamics into natural language is pivotal for brain-computer interfaces (BCIs), a field that has seen substantial growth in recent years. With the swift advancement of large language models such as ChatGPT, the need to bridge the gap between the brain and languages becomes increasingly pressing. Current methods, however, require eye-tracking fixations or event markers to segment brain dynamics into word-level features, which can restrict the practical application of these systems. These event markers may not be readily available or could be challenging to acquire during real-time inference, and the sequence of eye fixations may not align with the order of spoken words. To tackle these issues, we introduce a novel framework, DeWave, that integrates discrete encoding sequences into open-vocabulary EEG-to-text translation tasks. DeWave uses a quantized variational encoder to derive a discrete codex encoding and align it with pre-trained language models. This discrete codex representation brings two advantages: 1) it alleviates the order mismatch between eye fixations and spoken words by introducing text-EEG contrastive alignment training, and 2) it minimizes the interference caused by individual differences in EEG waves through an invariant discrete codex. Our model surpasses the previous baselines (40.1 BLEU-1 and 31.7 Rouge-F) by 3.06% and 6.34%, respectively, achieving 41.35 BLEU-1 and 33.71 Rouge-F on the ZuCo dataset. Furthermore, this work is the first to facilitate the translation of entire EEG signal periods without word-level order markers (e.g., eye fixations), scoring 20.5 BLEU-1 and 29.5 Rouge-1 on the ZuCo dataset. Code and the final paper will be made public soon.
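The discrete codex encoding at the heart of DeWave is a vector-quantization step: each continuous EEG feature vector is snapped to its nearest entry in a learned codebook, producing a discrete token. A minimal numpy sketch of that lookup, with a made-up codebook size and random (untrained) code embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))  # 16 discrete codes, 8-dim embeddings

def quantize(z):
    """Nearest-neighbour codebook lookup, as in a quantized variational
    encoder: a continuous feature vector is replaced by its closest
    codebook entry, yielding a discrete token index plus its embedding."""
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

z = rng.normal(size=8)   # stand-in for one continuous EEG feature vector
idx, z_q = quantize(z)
# `idx` is the discrete token that can be aligned with a pre-trained
# language model's vocabulary; `z_q` is the quantized embedding passed
# downstream in place of the raw, subject-specific feature.
```

Because every subject's features collapse onto the same finite codebook, subject-specific variation in the raw EEG is absorbed by the lookup, which is the mechanism behind the "invariant discrete codex" claim.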