40 research outputs found

    Multi-modal post-editing of machine translation

    As MT quality continues to improve, more and more translators are switching from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold: first, we explore whether interaction modalities other than mouse and keyboard could better support PE by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when a high cognitive load (CL) is detected during PE. We therefore developed a sensor framework that uses a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multi-modal combinations of eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors. Deutsche Forschungsgemeinschaft (DFG), Projekt MMP
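
    The abstract describes the multi-encoder APE architectures only at a high level. As a hedged illustration (an assumed layout, not the thesis code), the following PyTorch sketch shows one common way to build such a model: one encoder reads the source sentence, a second reads the raw MT output, and a decoder cross-attends to both to produce the post-edited translation.

        # Hedged sketch of a multi-encoder Transformer for APE; sizes and names are assumptions.
        import torch
        import torch.nn as nn

        class MultiEncoderAPE(nn.Module):
            def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, d_model)
                self.src_encoder = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
                self.mt_encoder = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
                self.decoder = nn.TransformerDecoder(
                    nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
                self.out = nn.Linear(d_model, vocab_size)

            def forward(self, src_ids, mt_ids, pe_ids):
                # Positional encodings and attention masks are omitted for brevity.
                src_mem = self.src_encoder(self.embed(src_ids))    # encode the source sentence
                mt_mem = self.mt_encoder(self.embed(mt_ids))       # encode the raw MT hypothesis
                memory = torch.cat([src_mem, mt_mem], dim=1)       # joint memory for cross-attention
                hidden = self.decoder(self.embed(pe_ids), memory)  # decode the post-edited target
                return self.out(hidden)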

    VIDEO FOREGROUND LOCALIZATION FROM TRADITIONAL METHODS TO DEEP LEARNING

    These days, detection of Visual Attention Regions (VAR), such as moving objects, has become an integral part of many Computer Vision applications, viz. pattern recognition, object detection and classification, video surveillance, autonomous driving, human-machine interaction (HMI), and so forth. Moving-object identification using bounding boxes has matured to the level of localizing the objects along their rigid borders, a process called foreground localization (FGL). Over the decades, many image segmentation methodologies have been studied, devised, and extended to suit video FGL. Despite that, video foreground (FG) segmentation remains an intriguing yet appealing task due to its ill-posed nature and myriad of applications. Maintaining spatial and temporal coherence, particularly at object boundaries, remains challenging and computationally burdensome. It gets even harder when the background is dynamic, with swaying tree branches or a shimmering water body, when there are illumination variations or shadows cast by the moving objects, or when the video sequences have jittery frames caused by vibrating or unstable camera mounts on a surveillance post or moving robot. At the same time, in the analysis of traffic flow or human activity, the performance of an intelligent system depends substantially on how robustly it localizes the VAR, i.e., the FG. The natural question therefore arises: what is the best way to deal with these challenges? The goal of this thesis is thus to investigate plausible real-time, performant implementations, from traditional approaches to modern-day deep learning (DL) models, for FGL that can be applied to many video content-aware applications (VCAA). It focuses mainly on improving existing methodologies by harnessing multimodal spatial and temporal cues for delineated FGL. The first part of the dissertation is dedicated to enhancing conventional sample-based and Gaussian mixture model (GMM)-based video FGL using probability mass functions (PMF), temporal median filtering, fusing CIEDE2000 color similarity, color distortion, and illumination measures, and picking an appropriate adaptive threshold to extract the FG pixels. Subjective and objective evaluations show the improvements over a number of similar conventional methods. The second part of the thesis focuses on exploiting and improving deep convolutional neural networks (DCNN) for the aforementioned problem. Consequently, three models akin to encoder-decoder (EnDec) networks are implemented with various innovative strategies to improve the quality of the FG segmentation. The strategies include, but are not limited to, double-encoding/slow-decoding feature learning, multi-view receptive-field feature fusion, and the incorporation of spatiotemporal cues through long short-term memory (LSTM) units in both the subsampling and upsampling subnetworks. Experimental studies are carried out thoroughly, from baselines to challenging video sequences, to prove the effectiveness of the proposed DCNNs. The analysis demonstrates the architectural efficiency of the proposed models over other methods, while quantitative and qualitative experiments show their competitive performance compared to the state of the art.
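
    As a hedged illustration of the first, conventional part (assumed parameters and a plain Euclidean color difference standing in for CIEDE2000, not the dissertation's exact pipeline), the sketch below combines an OpenCV GMM background subtractor with a temporal-median background model and an adaptive per-pixel threshold to fuse the two foreground cues.

        # Hedged sketch: GMM background subtraction fused with a temporal-median color test.
        import cv2
        import numpy as np
        from collections import deque

        mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=True)
        recent = deque(maxlen=25)   # frame buffer for the temporal-median background estimate

        def foreground_mask(frame_bgr):
            gmm_mask = mog.apply(frame_bgr)
            gmm_mask = (gmm_mask == 255).astype(np.uint8) * 255    # drop shadow pixels (value 127)
            recent.append(frame_bgr.astype(np.float32))
            background = np.median(np.stack(recent), axis=0)       # temporal-median background
            diff = np.linalg.norm(frame_bgr - background, axis=2)  # per-pixel color difference
            threshold = diff.mean() + 2.0 * diff.std()             # simple adaptive threshold (assumption)
            color_mask = (diff > threshold).astype(np.uint8) * 255
            return cv2.bitwise_and(gmm_mask, color_mask)           # fuse both foreground cues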

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest where objects may be found, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼ 7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and precision loss (0.92).
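
    As a hedged illustration of the two ingredients named above (assumed box format and class likelihoods, not the paper's implementation), the sketch below warps a previous detection into the current frame with a planar homography and fuses new class scores with the accumulated belief via a recursive Bayesian update.

        # Hedged sketch: homography-based box propagation plus a recursive Bayesian class update.
        import numpy as np
        import cv2

        def propagate_box(box_xyxy, H):
            """Warp the four corners of an axis-aligned box with homography H and re-fit a box."""
            x1, y1, x2, y2 = box_xyxy
            corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
            warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
            return np.array([warped[:, 0].min(), warped[:, 1].min(),
                             warped[:, 0].max(), warped[:, 1].max()])

        def bayes_update(prior, likelihood):
            """Recursive Bayesian filtering over per-class probabilities (1-D arrays)."""
            posterior = prior * likelihood
            return posterior / posterior.sum()

        # Usage: the belief starts uniform over nine classes and sharpens as detections accumulate.
        belief = np.full(9, 1.0 / 9)
        H = np.eye(3)                                   # inter-frame homography (assumed known here)
        roi = propagate_box(np.array([50, 40, 120, 90]), H)
        belief = bayes_update(belief, np.array([0.02] * 8 + [0.84]))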

    Learning disentangled representations of satellite image time series in a weakly supervised manner

    This work focuses on learning data representations of satellite image time series via an unsupervised learning approach. The main goal is to enforce the data representation to capture the relevant information from the time series in order to perform other applications of satellite imagery. However, extracting information from satellite data involves many challenges, since models need to deal with massive amounts of images provided by Earth observation satellites. Additionally, it is impossible for human operators to manually label such an amount of images for each individual task (e.g. classification, segmentation, change detection, etc.). Therefore, we cannot use the supervised learning framework, which achieves state-of-the-art results in many tasks. To address this problem, unsupervised learning algorithms have been proposed to learn the data structure instead of performing a specific task. Unsupervised learning is a powerful approach since no labels are required during training and the knowledge acquired can be transferred to other tasks, enabling faster learning with few labels. In this work, we investigate the problem of learning disentangled representations of satellite image time series, where a shared representation captures the spatial information across the images of the time series and an exclusive representation captures the temporal information which is specific to each image. We present the benefits of disentangling the spatio-temporal information of time series, e.g. the spatial information is useful for time-invariant image classification or segmentation, while knowledge about the temporal information is useful for change detection. To accomplish this, we analyze some of the most prevalent unsupervised learning models, such as the variational autoencoder (VAE) and generative adversarial networks (GANs), as well as extensions of these models for representation disentanglement. Encouraged by the successful results achieved by generative and reconstructive models, we propose a novel framework to learn spatio-temporal representations of satellite data. We prove that the learned disentangled representations can be used to perform several computer vision tasks such as classification, segmentation, information retrieval, and change detection, outperforming other state-of-the-art models. Nevertheless, our experiments suggest that generative and reconstructive models present some drawbacks related to the dimensionality of the data representation, architecture complexity, and the lack of disentanglement guarantees. In order to overcome these limitations, we explore a recent method based on mutual information estimation and maximization for representation learning, without relying on image reconstruction or image generation. We propose a new model that extends the mutual information maximization principle to disentangle the representation domain into two parts. In addition to the experiments performed on satellite data, we show that our model is able to deal with different kinds of datasets, outperforming state-of-the-art methods based on GANs and VAEs. Furthermore, we show that our mutual-information-based model is less computationally demanding yet more effective. Finally, we show that our model is useful to create a data representation that only captures the class information shared between two images belonging to the same category. Disentangling the class or category of an image from other factors of variation provides a powerful tool to compute the similarity between pixels and perform image segmentation in a weakly supervised manner.
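
    As a hedged illustration of the shared/exclusive split (a toy encoder and loss, not the thesis model), the sketch below maps each image of a pair from the same time series to a shared code, meant to hold the spatial content common to the series, and an exclusive code, meant to hold the image-specific temporal information; the shared codes of the pair are pushed to agree.

        # Hedged sketch: an encoder that splits its code into shared and exclusive parts.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SplitEncoder(nn.Module):
            def __init__(self, in_channels=3, shared_dim=64, exclusive_dim=16):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.to_shared = nn.Linear(64, shared_dim)        # spatial content of the series
                self.to_exclusive = nn.Linear(64, exclusive_dim)  # image-specific temporal content

            def forward(self, x):
                features = self.backbone(x)
                return self.to_shared(features), self.to_exclusive(features)

        def pair_loss(encoder, image_t1, image_t2):
            shared_1, _ = encoder(image_t1)
            shared_2, _ = encoder(image_t2)
            # Toy agreement term; the thesis instead relies on mutual-information objectives.
            return F.mse_loss(shared_1, shared_2)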

    Note Taking in the Digital Age – Towards a Ubiquitous Pen Interface

    The cultural technique of writing has helped humans to express, communicate, think, and memorize throughout history. With the advent of human-computer interfaces, pens as command input for digital systems became popular. While current applications allow carrying out complex tasks with digital pens, they lack the ubiquity and directness of pen and paper. This dissertation models the note-taking process in the context of scholarly work, motivated by an understanding of note taking that surpasses mere storage of knowledge. The results, together with qualitative empirical findings about contemporary scholarly workflows that alternate between the analog and the digital world, inspire a novel pen interface concept. This concept proposes the use of an ordinary pen and unmodified writing surfaces for interacting with digital systems. A technological investigation into how a camera-based system can connect physical ink strokes with digital handwriting processing delivers artificial neural network-based building blocks towards that goal. Using these components, the technological feasibility of in-air pen gestures for command input is explored. A proof-of-concept implementation of a prototype system reaches real-time performance and demonstrates distributed computing strategies for realizing the interface concept in an end-user setting.

    Harnessing the Power of Generative Models for Mobile Continuous and Implicit Authentication

    Authenticating a user's identity lies at the heart of securing any information system. A trade-off currently exists between user experience and the level of security the system provides. Using Continuous and Implicit Authentication, a user's identity can be verified without any active participation, hence increasing both the level of security, given the continuous verification aspect, and the user experience, given its implicit nature. This thesis studies the use of mobile devices' inertial sensor data to identify unique movements and patterns that identify the owner of the device at all times. We implement and evaluate approaches proposed in related works as well as novel approaches based on a variety of machine learning models, specifically a kind of Auto Encoder (AE) called the Variational Auto Encoder (VAE), which belongs to the family of generative models. We evaluate numerous machine learning models for the anomaly (outlier) detection case of spotting a malicious user or an unauthorised entity currently using the smartphone. We evaluate the results under conditions similar to other works as well as under conditions typically observed in real-world applications. We find that the shallow VAE is the best-performing semi-supervised anomaly detector in our evaluations and hence the most suitable for the proposed design. The thesis concludes with recommendations for enhancing the system and the body of research dedicated to the domain of Continuous and Implicit Authentication for mobile security.
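
    As a hedged illustration (assumed window size and dimensions, not the thesis implementation), the sketch below shows a shallow VAE over fixed-length inertial-sensor windows used as a semi-supervised anomaly detector: the model is trained only on the owner's data, and a window whose reconstruction error exceeds a threshold calibrated on that data is flagged as coming from an unauthorised user.

        # Hedged sketch: a shallow VAE scoring inertial-sensor windows by reconstruction error.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ShallowVAE(nn.Module):
            def __init__(self, in_dim=300, latent_dim=16):     # e.g. 50 samples x 6 IMU channels
                super().__init__()
                self.enc = nn.Linear(in_dim, 64)
                self.mu = nn.Linear(64, latent_dim)
                self.logvar = nn.Linear(64, latent_dim)
                self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

            def forward(self, x):
                h = F.relu(self.enc(x))
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
                return self.dec(z), mu, logvar

        def anomaly_score(model, windows):
            recon, _, _ = model(windows)
            return F.mse_loss(recon, windows, reduction='none').mean(dim=1)  # per-window error

        # A window is rejected when its score exceeds a threshold calibrated on the owner's
        # enrollment data, e.g. a high percentile of the owner's own scores (assumption).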

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows one to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and the human have to be revised w.r.t. the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.

    Novel Methods for Natural Language Modeling and Pretraining

    This thesis is about modeling language sequences to achieve lower perplexity and better generation, and to benefit downstream language tasks; specifically, it addresses the importance of natural language features including the segmentation feature, the lexical feature, and the alignment feature. We present three new techniques that improve language sequence modeling with different language features. 1. Segment-Aware Language Modeling is a novel model architecture leveraging the text segmentation feature for text sequence modeling. It encodes richer positional information for language modeling by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token. By applying our approach to Transformer-XL, we train a new language model, Segatron-XL, that achieves a 6.6-7.8% relative reduction in perplexity. Additionally, BERT pretrained with our method -- SegaBERT -- outperforms BERT on general language understanding, sentence representation learning, and machine reading comprehension tasks. Furthermore, our SegaBERT-large model outperforms RoBERTa-large on zero-shot STS tasks. These experimental results demonstrate that our proposed Segatron works both on language models with relative position embeddings and on pretrained language models with absolute position embeddings. 2. Hypernym-Instructed Language Modeling is a novel training method leveraging the lexical feature for rare word modeling. It maps words that have a common WordNet hypernym to the same class and trains large neural LMs by gradually annealing from class prediction to token prediction during training. Class-based prediction leads to an implicit context aggregation for similar words and thus can improve generalization for rare words. Empirically, this curriculum learning strategy consistently reduces perplexity over various large, highly performant state-of-the-art Transformer-based models on two datasets, WikiText-103 and ArXiv. Our analysis shows that the performance improvement is achieved without sacrificing performance on rare words. 3. Alignment-Aware Acoustic and Text Modeling (A3T) is a novel pretraining method leveraging both the segmentation and alignment features for text-speech sequence modeling. It reconstructs masked acoustic signals with text input and acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied directly to speech editing and new-speaker TTS. Experiments show that A3T outperforms SOTA models on speech editing and improves multi-speaker speech synthesis without an external speaker verification model.
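
    As a hedged illustration of the segment-aware position encoding (assumed sizes, not the Segatron code), the sketch below replaces the single token-index embedding with the sum of paragraph-, sentence-, and token-level position embeddings, which is then added to the token embeddings before the first Transformer layer.

        # Hedged sketch: combined paragraph/sentence/token position encoding.
        import torch
        import torch.nn as nn

        class SegmentAwarePositionEncoding(nn.Module):
            def __init__(self, d_model=768, max_para=64, max_sent=128, max_tok=512):
                super().__init__()
                self.para_emb = nn.Embedding(max_para, d_model)   # paragraph index in the document
                self.sent_emb = nn.Embedding(max_sent, d_model)   # sentence index in the paragraph
                self.tok_emb = nn.Embedding(max_tok, d_model)     # token index in the sentence

            def forward(self, para_pos, sent_pos, tok_pos):
                # Each argument is a LongTensor of shape (batch, seq_len).
                return self.para_emb(para_pos) + self.sent_emb(sent_pos) + self.tok_emb(tok_pos)

        # Usage with toy indices for a two-sentence paragraph.
        enc = SegmentAwarePositionEncoding()
        para = torch.zeros(1, 6, dtype=torch.long)
        sent = torch.tensor([[0, 0, 0, 1, 1, 1]])
        tok = torch.tensor([[0, 1, 2, 0, 1, 2]])
        positions = enc(para, sent, tok)    # shape (1, 6, 768), added to the token embeddings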

    Learning Transferable Knowledge Through Embedding Spaces

    The unprecedented processing demand posed by the explosion of big data challenges researchers to design efficient and adaptive machine learning algorithms that do not require persistent retraining and avoid learning redundant information. Inspired by the learning techniques of intelligent biological agents, identifying transferable knowledge across learning problems has become a significant research focus for improving machine learning algorithms. In this thesis, we address the challenges of knowledge transfer through embedding spaces that capture and store hierarchical knowledge. In the first part of the thesis, we focus on the problem of cross-domain knowledge transfer. We first address zero-shot image classification, where the goal is to identify images from unseen classes using semantic descriptions of these classes. We train two coupled dictionaries which align the visual and semantic domains via an intermediate embedding space. We then extend this idea by training deep networks that match the data distributions of two visual domains in a shared cross-domain embedding space. Our approach addresses both semi-supervised and unsupervised domain adaptation settings. In the second part of the thesis, we investigate the problem of cross-task knowledge transfer. Here, the goal is to identify relations and similarities among multiple machine learning tasks to improve performance across the tasks. We first address the problem of zero-shot learning in a lifelong machine learning setting, where the goal is to learn tasks with no data using high-level task descriptions. Our idea is to relate high-level task descriptors to the optimal task parameters through an embedding space. We then develop a method to overcome the problem of catastrophic forgetting within the continual learning setting of deep neural networks by enforcing the tasks to share the same distribution in the embedding space. We further demonstrate that our model can address the challenges of domain adaptation in the continual learning setting. Finally, we consider the problem of cross-agent knowledge transfer in the third part of the thesis. We demonstrate that multiple lifelong machine learning agents can collaborate to increase individual performance by sharing learned knowledge through a shared embedding space without sharing private data. We demonstrate that, despite major differences, problems within the above learning scenarios can be tackled through learning an intermediate embedding space that allows transferring knowledge effectively.
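
    As a hedged illustration of relating task descriptors to task parameters through an embedding space (toy dimensions and module names are assumptions, not the thesis method), the sketch below encodes a high-level task descriptor into the embedding space and decodes candidate task parameters from it, so parameters for an unseen task could be estimated from its description alone.

        # Hedged sketch: zero-shot task-parameter prediction via a shared embedding space.
        import torch
        import torch.nn as nn

        descriptor_dim, param_dim, embed_dim = 32, 1024, 128

        descriptor_encoder = nn.Sequential(nn.Linear(descriptor_dim, 256), nn.ReLU(),
                                           nn.Linear(256, embed_dim))
        param_decoder = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                                      nn.Linear(256, param_dim))

        def predict_task_parameters(descriptor):
            """Map a task descriptor into the embedding space and decode candidate parameters."""
            return param_decoder(descriptor_encoder(descriptor))

        # Training (not shown) would fit both networks on seen tasks with known optimal
        # parameters, so that embeddings of related tasks land close together.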