
    An Outlook into the Future of Egocentric Vision

    What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated into our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration so as to unlock our path to a future of always-on, personalised and life-enhancing egocentric vision.
    Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1

    UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization

    The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose UniSumm, a unified few-shot summarization model that is pre-trained on multiple summarization tasks and can be prefix-tuned to excel at any few-shot summarization task. Meanwhile, to better evaluate few-shot summarizers under the principles of diversity and robustness, we assemble and release a new benchmark, SummZoo. It consists of 8 summarization tasks with multiple sets of few-shot samples for each task, covering diverse domains. Experimental results and analysis show that UniSumm outperforms strong baselines by a large margin across all sub-tasks in SummZoo under both automatic and human evaluations, and achieves results comparable to a GPT-3.5 model in human evaluation.
    Comment: ACL 2023 main conference
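
    The prefix-tuning step mentioned above can be reproduced in spirit with off-the-shelf tooling. The sketch below assumes the HuggingFace peft library and uses facebook/bart-large as a stand-in backbone (not the released UniSumm checkpoint); it freezes the summarizer and trains only a small prefix on the few-shot examples.

    ```python
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PrefixTuningConfig, TaskType, get_peft_model

    backbone = "facebook/bart-large"          # stand-in backbone, not the UniSumm checkpoint
    tokenizer = AutoTokenizer.from_pretrained(backbone)
    model = AutoModelForSeq2SeqLM.from_pretrained(backbone)

    # Trainable prefix of virtual tokens; the backbone weights stay frozen.
    peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=64)
    model = get_peft_model(model, peft_config)
    model.print_trainable_parameters()        # only the prefix parameters are updated

    # The few-shot samples of the target task are then used with the usual
    # seq2seq cross-entropy loss (e.g. via transformers' Seq2SeqTrainer).
    ```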

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, whether learned interactively or autonomously from data in cognitive and neural systems, and on their potential or actual applications in different domains.

    Exploratory search over semi-structured documents


    Memorization for Good: Encryption with Autoregressive Language Models

    Over-parameterized neural language models (LMs) can memorize and recite long sequences of training data. While such memorization is normally associated with undesired properties such as overfitting and information leaking, our work casts memorization as an unexplored capability of LMs. We propose the first symmetric encryption algorithm with autoregressive language models (SELM). We show that autoregressive LMs can encode arbitrary data into a compact real-valued vector (i.e., encryption) and then losslessly decode the vector to the original message (i.e., decryption) via random subspace optimization and greedy decoding. While SELM is not amenable to conventional cryptanalysis, we investigate its security through a novel empirical variant of the classic IND-CPA (indistinguishability under chosen-plaintext attack) game and show promising results on security. Our code and datasets are available at https://github.com/OSU-NLP-Group/SELM.
    Comment: Main text: 9 pages, 4 figures, 1 table. Work in progress. Project website at https://samuelstevens.me/research/encryption
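
    To make the encode-then-decode idea concrete, the sketch below trains a small continuous prefix so that a frozen GPT-2 greedily regenerates a given message from that prefix alone. This only illustrates the memorization-as-encoding principle under assumed hyperparameters; it is not the paper's random-subspace construction and provides no security.

    ```python
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    for p in model.parameters():
        p.requires_grad_(False)                     # the LM itself stays frozen

    msg = "meet me at noon"
    ids = tok(msg, return_tensors="pt").input_ids   # (1, T)
    emb = model.transformer.wte(ids)                # token embeddings of the message

    # The "ciphertext": a small trainable embedding prepended to the message.
    n_prefix, d = 8, model.config.n_embd
    prefix = torch.nn.Parameter(0.02 * torch.randn(1, n_prefix, d))
    opt = torch.optim.Adam([prefix], lr=0.05)

    for _ in range(300):                            # fit the prefix to memorize msg
        logits = model(inputs_embeds=torch.cat([prefix, emb], dim=1)).logits
        pred = logits[:, n_prefix - 1:-1, :]        # positions that predict msg tokens
        loss = torch.nn.functional.cross_entropy(pred.reshape(-1, pred.size(-1)),
                                                 ids.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # "Decryption": greedy decoding conditioned only on the learned prefix.
    with torch.no_grad():
        ctx, out = prefix.detach(), []
        for _ in range(ids.size(1)):
            nxt = model(inputs_embeds=ctx).logits[:, -1, :].argmax(-1)
            out.append(nxt.item())
            ctx = torch.cat([ctx, model.transformer.wte(nxt).unsqueeze(1)], dim=1)
    print(tok.decode(out))                          # ideally reproduces msg
    ```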

    A personality aware recommendation system

    A Conversational Recommendation System (CRS) provides personalized recommendations through a session of natural language dialogue turns with users. Unlike traditional one-shot recommendation systems, which only take the user's past preferences as ground truth, a CRS also involves the user's current preferences expressed during the conversation. Recent research shows that understanding the contextual meaning of user preferences and dialogue turns can significantly improve recommendation performance. It also shows a strong link between users' personality traits and recommendation systems. Personality and preferences are essential variables in computational sociology and social science: they describe the differences between people, both at the individual and the collective level. Recent personality-based recommendation approaches are traditional one-shot, non-conversational systems. Therefore, there is a significant need to detect and employ individuals' personality traits within the CRS paradigm to ensure better and more personalized dialogue and recommendation performance. Driven by these facts, this study proposes a modularized, personality-aware CRS that provides a personalized dialogue and recommendation session using the users' personality traits. We also propose a novel personality detection approach, a context-specific language model that detects individuals' personality traits from their social media data. The goal is to create a personality-aware and topic-guided CRS model that performs better than standard CRS models. Experimental results show that our personality-aware conversational recommendation system outperforms state-of-the-art approaches on the considered metrics on a topic-guided conversation recommendation dataset.
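
    As a rough illustration of the personality-detection component, the sketch below scores Big Five traits from a user's posts with a generic transformers regression head. The backbone, the sigmoid scaling and the trait ordering are illustrative assumptions, not the thesis's actual context-specific model.

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    backbone = "bert-base-uncased"   # stand-in backbone, not the thesis's model
    tokenizer = AutoTokenizer.from_pretrained(backbone)
    model = AutoModelForSequenceClassification.from_pretrained(
        backbone, num_labels=5, problem_type="regression")    # one score per Big Five trait

    posts = "I love meeting new people and trying a new restaurant every weekend."
    inputs = tokenizer(posts, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        scores = torch.sigmoid(model(**inputs).logits)         # head is untrained here;
                                                               # fine-tune on labelled profiles
    traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
    print(dict(zip(traits, scores.squeeze().tolist())))
    ```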

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models, and that they can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
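
    A minimal PyTorch sketch of an attention-based BiLSTM classifier of this kind is given below. The CSI window length, the 90-subcarrier input size and the hidden size are illustrative assumptions; only the 12-class output follows the paper.

    ```python
    import torch
    import torch.nn as nn

    class ABiLSTM(nn.Module):
        """Attention-based BiLSTM over CSI time series (illustrative sketch)."""
        def __init__(self, n_subcarriers=90, hidden=128, n_classes=12):
            super().__init__()
            self.lstm = nn.LSTM(n_subcarriers, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                       # x: (batch, time, subcarriers)
            h, _ = self.lstm(x)                     # (batch, time, 2*hidden)
            w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
            ctx = (w * h).sum(dim=1)                # attention-weighted temporal pooling
            return self.head(ctx)                   # class logits

    # Example: a batch of 8 windows, 500 CSI samples each, 90 subcarriers.
    logits = ABiLSTM()(torch.randn(8, 500, 90))
    print(logits.shape)                             # torch.Size([8, 12])
    ```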

    A Comprehensive Overview of Large Language Models

    Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose various new architectures, tweak existing architectures with refined training strategies, increase context length, use high-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper also discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold is evolving through time, joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a supernova.
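
    One simple way to realise such a first-order Markovian propagation, sketched below under assumed choices (a Gaussian mixture as the spatial model, scikit-learn, toy data rather than the paper's simulation), is to fit a mixture to each snapshot while initialising it from the parameters of the previous time step.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def propagate_gmm(snapshots, n_components=5, seed=0):
        """Fit a GMM per snapshot, warm-starting each fit from the previous time
        step, i.e. a first-order Markov propagation of the spatial model (sketch)."""
        models, prev = [], None
        for X in snapshots:                      # X: (n_particles, n_dims) at one time step
            if prev is None:
                gmm = GaussianMixture(n_components=n_components, random_state=seed)
            else:
                gmm = GaussianMixture(
                    n_components=n_components,
                    weights_init=prev.weights_,
                    means_init=prev.means_,
                    precisions_init=np.linalg.inv(prev.covariances_),
                    random_state=seed,
                )
            models.append(gmm.fit(X))
            prev = gmm
        return models

    # Toy data: three snapshots of a slowly drifting 3-D particle cloud.
    rng = np.random.default_rng(0)
    snapshots = [rng.normal(loc=0.1 * t, size=(2000, 3)) for t in range(3)]
    models = propagate_gmm(snapshots, n_components=2)
    ```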