
    Automated Testing of Software Upgrades for Android Systems

    Apps’ pervasive role in our society motivates researchers to develop automated techniques ensuring dependability through testing. However, although App updates are frequent and software engineers would like to prioritize the testing of updated features, automated testing techniques verify entire Apps and thus waste resources. Further, most testing techniques can detect only crashing failures, necessitating visual inspection of outputs to detect functional failures, which is a costly task. Despite efforts to automatically derive oracles for functional failures, the effectiveness of existing approaches is limited. Therefore, instead of automating human tasks, it seems preferable to minimize what should be visually inspected by engineers. To address the problems above, in this dissertation, we propose approaches to maximize testing effectiveness while containing test execution time and human effort. First, we present ATUA (Automated Testing of Updates for Apps), a model-based approach that synthesizes App models with static analysis, integrates a dynamically refined state abstraction function, and combines complementary testing strategies, thus enabling ATUA to generate a small set of inputs that exercise only the code affected by updates. A large empirical evaluation conducted with 72 App versions belonging to nine popular Android Apps has shown that ATUA is more effective and less effort-intensive than state-of-the-art approaches when testing App updates. Second, we present CALM (Continuous Adaptation of Learned Models), an automated App testing approach that efficiently tests App updates by adapting App models learned when automatically testing previous App versions. CALM minimizes the number of App screens to be visualized by software testers while maximizing the percentage of updated methods and instructions exercised. Our empirical evaluation shows that CALM exercises a significantly higher proportion of updated methods and instructions than baselines for the same maximum number of App screens to be visually inspected. Further, in common update scenarios, where only a small fraction of methods are updated, CALM outperforms all competing approaches even more quickly and by a wider margin. Finally, we minimize test oracle cost by defining strategies for selecting, for visual inspection, a subset of the App outputs. We assessed 26 strategies, relying on either code coverage or action effect, on Apps affected by functional faults confirmed by their developers. Our empirical evaluation has shown that our strategies have the potential to enable the identification of a large proportion of faults. By combining code coverage with action effect, it is possible to reduce oracle cost by about 41.2% while enabling engineers to detect all the faults exercised by test automation approaches.
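
    The oracle-cost reduction described above can be pictured as a coverage-driven selection problem. The sketch below is a hypothetical illustration, not the dissertation's actual strategy: it greedily picks App screens that cover the most not-yet-inspected updated methods, preferring actions with a visible GUI effect, and stops at a fixed inspection budget. The screen descriptors, method identifiers, and budget are invented for the example.

    # Hypothetical greedy selection of App screens for visual inspection,
    # combining coverage of updated methods with an action-effect signal.
    def select_screens_for_inspection(screens, budget):
        """Greedily pick screens that cover the most not-yet-inspected updated
        methods, breaking ties in favour of actions with a visible GUI effect."""
        covered, selected = set(), []
        remaining = list(screens)
        while remaining and len(selected) < budget:
            best = max(remaining,
                       key=lambda s: (len(s["covered_updates"] - covered),
                                      s["action_had_effect"]))
            if not (best["covered_updates"] - covered):
                break  # every remaining screen only revisits already-covered updates
            selected.append(best)
            covered |= best["covered_updates"]
            remaining.remove(best)
        return selected, covered

    if __name__ == "__main__":
        screens = [  # invented descriptors: updated methods exercised + GUI-effect flag
            {"id": "S1", "covered_updates": {"m1", "m2"}, "action_had_effect": True},
            {"id": "S2", "covered_updates": {"m2"}, "action_had_effect": False},
            {"id": "S3", "covered_updates": {"m3", "m4"}, "action_had_effect": True},
            {"id": "S4", "covered_updates": {"m4"}, "action_had_effect": True},
        ]
        chosen, covered = select_screens_for_inspection(screens, budget=2)
        print([s["id"] for s in chosen], sorted(covered))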

    Artificial Intelligence and International Conflict in Cyberspace

    This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace has developed into a crucial frontier and issue of international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide – wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes, namely: (1) technical and operational, (2) strategic and geopolitical and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered as an analytical or a disciplinary demarcation.

    Design and Evaluation of Machine Translation Systems for Real-World Applications (実応用を志向した機械翻訳システムの設計と評価)

    Doctoral thesis (Information Sciences), Tohoku University

    A review of natural language processing in contact centre automation

    Contact centres have been highly valued by organizations for a long time. However, the COVID-19 pandemic has highlighted their critical importance in ensuring business continuity, economic activity, and quality customer support. The pandemic has led to an increase in customer inquiries related to payment extensions, cancellations, and stock inquiries, each with varying degrees of urgency. To address this challenge, organizations have taken the opportunity to re-evaluate the function of contact centres and explore innovative solutions. Next-generation platforms that incorporate machine learning techniques and natural language processing, such as self-service voice portals and chatbots, are being implemented to enhance customer service. These platforms offer robust features that equip customer agents with the necessary tools to provide exceptional customer support. Through an extensive review of existing literature, this paper aims to uncover research gaps and explore the advantages of transitioning to a contact centre that utilizes natural language solutions as the norm. Additionally, we examine the major challenges faced by contact centre organizations and offer recommendations.
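
    As a concrete illustration of the kind of natural language component discussed above, the sketch below routes incoming customer messages to intents such as payment extensions, cancellations, or stock inquiries. It is a minimal baseline, not a reference implementation from the literature reviewed in the paper; the training utterances and intent labels are invented, and a production contact centre would rely on much larger datasets and stronger models.

    # Minimal intent router for contact-centre messages (illustrative data only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Can I delay my payment until next month?",
        "I need more time to pay my bill",
        "Please cancel my subscription",
        "I want to cancel the order I placed yesterday",
        "Is this item back in stock?",
        "Do you have the blue model available?",
    ]
    train_intents = [
        "payment_extension", "payment_extension",
        "cancellation", "cancellation",
        "stock_inquiry", "stock_inquiry",
    ]

    # TF-IDF features with a linear classifier: a common lightweight baseline.
    router = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
    router.fit(train_texts, train_intents)

    print(router.predict(["When will the red jacket be available again?"]))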

    Natural Language Processing for Motivational Interviewing Counselling: Addressing Challenges in Resources, Benchmarking and Evaluation

    Motivational interviewing (MI) is a counselling style often used in healthcare to improve patient health and quality of life by promoting positive behaviour changes. Natural language processing (NLP) has been explored for supporting MI use cases of insights/feedback generation and therapist training, such as automatically assigning behaviour labels to therapist/client utterances and generating possible therapist responses. Despite the progress of NLP for MI applications, significant challenges remain. The most prominent one is the lack of publicly available and annotated MI dialogue corpora due to privacy constraints. Consequently, there is also a lack of common benchmarks and poor reproducibility across studies. Furthermore, human evaluation for therapist response generation is expensive and difficult to scale due to its dependence on MI experts as evaluators. In this thesis, we address these challenges in four directions: low-resource NLP modelling, MI dialogue dataset creation, benchmark development for real-world applicable tasks, and a human evaluation study with both laypeople and experts. First, we explore zero-shot binary empathy assessment at the utterance level. We experiment with a supervised approach that trains on heuristically constructed empathy vs. non-empathy contrast in non-therapy dialogues. While this approach has better performance than other models without empathy-aware training, it is still suboptimal and therefore highlights the need for a well-annotated MI dataset. Next, we create AnnoMI, the first publicly available dataset of expert-annotated MI dialogues. It contains MI conversations that demonstrate both high- and low-quality counselling, with extensive annotations by domain experts covering key MI attributes. We also conduct comprehensive analyses of the dataset. Then, we investigate two AnnoMI-based real-world applicable tasks: predicting current-turn therapist/client behaviour given the utterance, and forecasting next-turn therapist behaviour given the dialogue history. We find that language models (LMs) perform well on predicting therapist behaviours with good generalisability to new dialogue topics. However, LMs have suboptimal forecasting performance, which reflects therapists' flexibility where multiple optimal next-turn actions are possible. Lastly, we ask both laypeople and experts to evaluate the generation of a crucial type of therapist response -- reflection -- on a key quality aspect: coherence and context-consistency. We find that laypeople are a viable alternative to experts, as laypeople show good agreement with each other and correlation with experts. We also find that a large LM generates mostly coherent and consistent reflections. Overall, the work of this thesis significantly broadens access to NLP for MI and presents a wide range of findings on related natural language understanding/generation tasks with a real-world focus. Thus, our contributions lay the groundwork for the broader NLP community to be more engaged in research for MI, which will ultimately improve the quality of life for recipients of MI counselling.
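
    To make the zero-shot utterance-level assessment mentioned above concrete, the sketch below applies an off-the-shelf NLI-based zero-shot classifier from Hugging Face transformers to a single, invented therapist utterance with binary empathy labels. It only illustrates the general setup, not the thesis's model, training scheme, or evaluation protocol.

    # Zero-shot binary empathy labelling of one utterance (illustrative only).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification")  # downloads a default NLI model

    utterance = "That sounds really difficult, and it makes sense that you feel overwhelmed."
    result = classifier(utterance, candidate_labels=["empathic", "not empathic"])
    print(result["labels"][0], round(result["scores"][0], 3))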

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from start to end. It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
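
    A minimal way to picture the reinforcement-learning adaptation described above is a bandit loop: the robot repeatedly picks one of several non-functional behaviour variants and updates its preference estimates from scalar human feedback. The epsilon-greedy sketch below is a deliberately simple stand-in for the thesis's framework; the behaviour variants, the simulated user, and all numbers are invented.

    # Epsilon-greedy bandit adapting a robot's behaviour style from simulated feedback.
    import random

    ACTIONS = ["formal", "casual", "humorous"]

    def simulated_feedback(action):
        """Hypothetical user who tends to reward humorous behaviour (+1 good, -1 bad)."""
        preference = {"formal": 0.2, "casual": 0.5, "humorous": 0.8}
        return 1.0 if random.random() < preference[action] else -1.0

    def adapt_behaviour(episodes=500, epsilon=0.1, alpha=0.1):
        q = {a: 0.0 for a in ACTIONS}            # estimated value of each behaviour variant
        for _ in range(episodes):
            if random.random() < epsilon:        # occasionally explore a random variant
                action = random.choice(ACTIONS)
            else:                                # otherwise exploit the current best estimate
                action = max(q, key=q.get)
            reward = simulated_feedback(action)
            q[action] += alpha * (reward - q[action])  # incremental value update
        return q

    if __name__ == "__main__":
        print(adapt_behaviour())  # the estimate for "humorous" should end up highest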

    Technology Assessment in a Globalized World

    This open access book explores the relevance of the concept of technology assessment (TA) on an international and global level. Technologies play a key role in addressing global challenges such as climate change, population aging, digitization, and health. At the same time, their use increases the need for coordinated action and governance at the global level in the field of science, technology and innovation (STI). Featuring case studies on STI fields such as energy, biotechnology, artificial intelligence, and health technology, as well as TA activities at the national and international levels, this book reflects on the challenges and opportunities of global technology governance. It also provides an in-depth discussion of current governmental STI cultures and systems, societal expectations, and the policy priorities needed to achieve coordinated and effective STI intervention in policymaking and public debate at the global level. Lastly, the book promotes the establishment of a forum for a truly global dialogue of TA practitioners, fostering the articulation of their needs, knowledge and perspectives.

    End-to-End Simultaneous Speech Translation

    Speech translation is the task of translating speech in one language into text or speech in another language, while simultaneous translation aims at lower translation latency by starting the translation before the speaker finishes a sentence. The combination of the two, simultaneous speech translation, can be applied in low-latency scenarios such as live video caption translation and real-time interpretation. This thesis focuses on an end-to-end, or direct, approach to simultaneous speech translation. We first define the task of simultaneous speech translation, including its challenges and evaluation metrics. We then progressively introduce our contributions to tackle these challenges. First, we propose a novel simultaneous translation policy, monotonic multihead attention, for transformer models on text-to-text translation. Second, we investigate the issues and potential solutions when adapting text-to-text simultaneous policies to end-to-end speech-to-text translation models. Third, we introduce the augmented memory transformer encoder for simultaneous speech-to-text translation models, for better computational efficiency. Fourth, we explore direct simultaneous speech translation with a variational monotonic multihead attention policy, based on recent speech-to-unit models. Finally, we provide some directions for potential future research.
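
    Simultaneous policies decide, at each step, whether to read another source token or to write a target token. The thesis's policies are learned (monotonic multihead attention and its variational variant); the sketch below instead simulates the simpler, well-known wait-k rule (start writing after k source tokens, then alternate) purely to make the read/write scheduling concrete. The token streams and placeholder target tokens are invented, and the assumption that source and target have equal length is a simplification.

    # Wait-k read/write schedule for simultaneous translation (illustrative only).
    def wait_k_schedule(source_tokens, k=3):
        """Yield (action, token) pairs for a wait-k simultaneous policy."""
        written = 0
        for reads, token in enumerate(source_tokens, start=1):
            yield ("READ", token)
            if reads >= k:                      # after the first k reads, alternate
                yield ("WRITE", f"tgt_{written}")
                written += 1
        while written < len(source_tokens):     # flush once the source sentence ends
            yield ("WRITE", f"tgt_{written}")
            written += 1

    if __name__ == "__main__":
        for action, token in wait_k_schedule(["ich", "habe", "heute", "einen", "Termin"], k=2):
            print(action, token)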

    End-to-end Task-oriented Dialogue: A Survey of Tasks, Methods, and Future Directions

    End-to-end task-oriented dialogue (EToD) can directly generate responses in an end-to-end fashion without modular training, which has attracted growing interest. The advancement of deep neural networks, especially the successful use of large pre-trained models, has further led to significant progress in EToD research in recent years. In this paper, we present a thorough review and provide a unified perspective to summarize existing approaches as well as recent trends to advance the development of EToD research. The contributions of this paper can be summarized as follows: (1) First survey: to our knowledge, we take the first step to present a thorough survey of this research field; (2) New taxonomy: we introduce a unified perspective for EToD, including (i) Modularly EToD and (ii) Fully EToD; (3) New Frontiers: we discuss some potential frontier areas as well as the corresponding challenges, hoping to spur breakthrough research in the EToD field; (4) Abundant resources: we build a public website (https://etods.net/) that collects the related papers, baseline projects, and leaderboards for the community, where EToD researchers can directly access recent progress. We hope this work can serve as a thorough reference for the EToD research community. Comment: Accepted at EMNLP202

    Structure-aware narrative summarization from multiple views

    Narratives, such as movies and TV shows, provide a testbed for addressing a variety of challenges in the field of artificial intelligence. They are examples of complex stories where characters and events interact in many ways. Inferring what is happening in a narrative requires modeling long-range dependencies between events, understanding commonsense knowledge and accounting for non-linearities in the presentation of the story. Moreover, narratives are usually long (i.e., there are hundreds of pages in a screenplay and thousands of frames in a video) and cannot be easily processed by standard neural architectures. Movies and TV episodes also include information from multiple sources (i.e., video, audio, text) that are complementary to inferring high-level events and their interactions. Finally, creating large-scale multimodal datasets with narratives containing long videos and aligned textual data is challenging, resulting in small datasets that require data-efficient approaches. Most prior work that analyzes narratives does not consider the above challenges all at once. In most cases, text-only approaches focus on full-length narratives with complex semantics and address tasks such as question-answering and summarization, or multimodal approaches are limited to short videos with simpler semantics (e.g., isolated actions and local interactions). In this thesis, we combine these two different directions in addressing narrative summarization. We use all input modalities (i.e., video, audio, text), consider full-length narratives and perform the task of narrative summarization both in a video-to-video setting (i.e., video summarization, trailer generation) and a video-to-text setting (i.e., multimodal abstractive summarization). We hypothesize that information about the narrative structure of movies and TV episodes can facilitate summarizing them. We introduce the task of Turning Point identification and provide a corresponding dataset called TRIPOD as a means of analyzing the narrative structure of movies. According to screenwriting theory, turning points (e.g., change of plans, major setback, climax) are crucial narrative moments within a movie or TV episode: they define the plot structure and determine its progression and thematic units. We validate that narrative structure contributes to extractive screenplay summarization by testing our hypothesis on a dataset containing TV episodes and summary-specific labels. We further hypothesize that movies should not be viewed as a sequence of scenes from a screenplay or shots from a video and instead be modelled as sparse graphs, where nodes are scenes or shots and edges denote strong semantic relationships between them. We utilize multimodal information for creating movie graphs in the latent space, and find that both graph-related and multimodal information help contextualization and boost performance on extractive summarization. Moving one step further, we also address the task of trailer moment identification, which can be viewed as a specific instantiation of narrative summarization. We decompose this task, which is challenging and subjective, into two simpler ones: narrative structure identification, defined again by turning points, and sentiment prediction. We propose a graph-based unsupervised algorithm that uses interpretable criteria for retrieving trailer shots and convert it into an interactive tool with a human in the loop for trailer creation.
Semi-automatic trailer shot selection exhibits comparable performance to fully manual selection according to human judges, while minimizing processing time. After identifying salient content in narratives, we next attempt to produce abstractive textual summaries (i.e., video-to-text). We hypothesize that multimodal information is directly important for generating textual summaries, apart from contributing to content selection. For that, we propose a parameter-efficient way of incorporating multimodal information into a pre-trained textual summarizer, while training only 3.8% of model parameters, and demonstrate the importance of multimodal information for generating high-quality and factual summaries. The findings of this thesis underline the need to focus on realistic and multimodal settings when addressing narrative analysis and generation tasks.
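
    The sparse-graph view of a narrative described above can be sketched in a few lines of code: scenes become nodes, edges connect scene pairs whose similarity exceeds a threshold, and a centrality score ranks scenes for an extractive summary. The toy example below uses invented one-line scene descriptions, plain word-overlap similarity, and PageRank via networkx; the thesis instead builds graphs from learned multimodal representations.

    # Toy graph-based extractive summarization over invented scene descriptions.
    import networkx as nx

    scenes = {
        "s1": "hero receives a mysterious letter and leaves home",
        "s2": "hero argues with the mentor about the letter",
        "s3": "villain plots in the shadows of the city",
        "s4": "hero confronts the villain about the letter in the city",
        "s5": "quiet epilogue back at home",
    }

    def overlap(a, b):
        """Jaccard similarity over bags of words (a crude stand-in for learned similarity)."""
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / len(wa | wb)

    # Sparse scene graph: only sufficiently similar scene pairs are connected.
    graph = nx.Graph()
    graph.add_nodes_from(scenes)
    ids = list(scenes)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            similarity = overlap(scenes[u], scenes[v])
            if similarity > 0.15:
                graph.add_edge(u, v, weight=similarity)

    # Rank scenes by weighted PageRank and keep the top two as the extractive summary.
    scores = nx.pagerank(graph, weight="weight")
    print(sorted(scores, key=scores.get, reverse=True)[:2])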