54 research outputs found

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition; they must be equipped with corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and end-to-end guidance.
    It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes, which are evaluated in lab and in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
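    As a toy illustration of the kind of adaptation this thesis studies, selecting a behavior variant from user feedback can be framed as a multi-armed bandit, one of the simplest reinforcement-learning settings. The behavior names, feedback probabilities, and the epsilon-greedy strategy below are illustrative assumptions, not the thesis's actual method:

```python
import random

def adapt_behavior(behaviors, feedback_fn, episodes=2000, epsilon=0.1):
    """Epsilon-greedy bandit: track a running average of user feedback
    per behavior variant, mostly picking the current best while
    occasionally exploring alternatives."""
    counts = {b: 0 for b in behaviors}
    values = {b: 0.0 for b in behaviors}
    for _ in range(episodes):
        if random.random() < epsilon:
            choice = random.choice(behaviors)                  # explore
        else:
            choice = max(behaviors, key=lambda b: values[b])   # exploit
        reward = feedback_fn(choice)          # e.g. 1.0 if the user smiles
        counts[choice] += 1
        values[choice] += (reward - values[choice]) / counts[choice]
    return max(behaviors, key=lambda b: values[b])

# Hypothetical user who rewards polite phrasing most often
random.seed(0)
prefs = {"polite": 0.8, "neutral": 0.5, "humorous": 0.3}
best = adapt_behavior(list(prefs),
                      lambda b: 1.0 if random.random() < prefs[b] else 0.0)
```

    In the thesis's setting, the reward signal would come from explicit ratings or implicit social signals rather than a fixed preference table.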

    Combating Fake News: A Gravity Well Simulation to Model Echo Chamber Formation In Social Media

    Fake news has become a serious concern as distributing misinformation has become easier and more impactful. A solution is critically required. One option is to ban fake news, but that approach could create more problems than it solves, and it is problematic from the outset, since fake news must first be identified before it can be banned. We initially propose a method to automatically recognize suspected fake news and to provide news consumers with more information as to its veracity. We suggest that fake news comprises two components: premises and misleading content. Fake news can be condensed to a collection of premises, which may or may not be true, and to various forms of misleading material, including biased arguments and language, misdirection, and manipulation. Misleading content can then be exposed. While valuable, this framework’s utility may be limited by artificial intelligence, which can be used to alter fake news strategies at a rate exceeding the ability to update the framework. Therefore, we propose a model for identifying echo chambers, which are widely reported to be havens for fake news producers and consumers. We simulate a social media interest group as a gravity well, through which we identify the online groups postured to become echo chambers, and thus a source of fake news consumption and replication. This echo chamber model is supported by three pillars related to the social media group: the technology employed, the topic explored, and the confirmation bias of group members. The model is validated by modeling and analyzing 19 subreddits on the Reddit social media platform. Contributions include a working definition of fake news, a framework for recognizing fake news, a generic model of social media echo chambers including three pillars central to echo chamber formation, and a gravity well simulation for social media groups, implemented for 19 subreddits.
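    The abstract does not give the gravity-well equations, but the analogy can be sketched: treat the three pillar scores as contributing to a group's "mass" and a member's distance from the group's core position as the radius, so that pull follows an inverse-square law. The scores and their multiplicative combination below are assumptions for illustration only:

```python
def echo_chamber_pull(technology, topic, confirmation_bias, distance, G=1.0):
    """Toy gravity-well score for a social media group: the three pillar
    scores act like mass, and the pull on a member decays with the square
    of their distance from the group's core position."""
    mass = technology * topic * confirmation_bias   # pillar scores in (0, 1]
    return G * mass / distance ** 2

# A member close to the core of a high-mass group feels a far stronger pull
near = echo_chamber_pull(0.9, 0.8, 0.9, distance=0.5)
far = echo_chamber_pull(0.9, 0.8, 0.9, distance=2.0)
```

    Under this sketch, groups whose pull exceeds some threshold for many members would be flagged as postured to become echo chambers.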

    Implementing Multi Agent Systems (MAS)-based trust and reputation in smart IoT environments : A thesis submitted in partial fulfilment of the requirements for the Degree of Doctor of Philosophy at Lincoln University

    The Internet of Things (IoT) provides advanced services by interconnecting a huge number of heterogeneous smart things (virtual or physical devices) through existing interoperable information and communication technologies. As IoT devices become more intelligent, they will have the ability to communicate and cooperate with each other. In doing so, enormous amounts of sensitive data will flow within the network, such as credit card information, medical data, factory details, pictures, and videos. With sensitive data flowing through the network, privacy becomes one of the most important issues facing IoT. Studies of data sensitivity and privacy indicate the importance of evaluating the trustworthiness of IoT participants to maximize the satisfaction and the performance of IoT applications. It is also important to maintain successful collaboration between the devices deployed in the network and ensure all devices operate in a trustworthy manner. This research aims to determine how to select the best service provider in an IoT environment based on the trustworthiness and reputation of the service provider. To achieve this, we propose IoT-CADM (Comprehensive Agent-based Decision-making Model for IoT), a decentralized agent-based trust and reputation model that selects the best service providers for a particular service based on multi-context quality of service. IoT-CADM, a novel trust and reputation model, is developed for the smart multi-agent IoT environment to gather information from entities and score them using a new trust and reputation scoring mechanism. IoT-CADM aims to ensure that service consumers are serviced by the best service providers in the IoT environment, which in turn maximizes the service consumers’ satisfaction and leads IoT entities to operate and make decisions on behalf of their owners in a trustworthy manner.
    To evaluate the performance of the proposed model against some other well-known models, such as ReGreT, SIoT, and R-D-C, we implemented a scenario based on the SIPOC supply chain approach, developed using an agent development framework called JADE. This research used the TOPSIS approach to compare and rank the performance of these models based on different parameters that were chosen carefully for a fair comparison. The TOPSIS results confirmed that the proposed IoT-CADM has the highest performance. In addition, the model's parameter weights can be tuned to adapt to varying scenarios in honest and dishonest agent environments.
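    TOPSIS itself is a standard multi-criteria ranking method, so it can be sketched independently of the thesis's specific parameters, which are not given here. The decision matrix, weights, and criteria below are hypothetical:

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS: rank alternatives by relative closeness to the ideal solution.
    matrix[i][j] = score of alternative i on criterion j;
    benefit[j] is True if higher is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) points per criterion
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))   # closeness in [0, 1]
    return scores

# Three hypothetical models scored on one benefit and one cost criterion
scores = topsis([[0.9, 0.1], [0.5, 0.5], [0.2, 0.9]],
                weights=[0.5, 0.5], benefit=[True, False])
```

    The alternative with the highest closeness score ranks first, which is how the comparison among IoT-CADM and the baseline models would be read off.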

    Exploiting Group Structures to Infer Social Interactions From Videos

    In this thesis, we consider the task of inferring the social interactions between humans by analyzing multi-modal data. Specifically, we attempt to solve some of the problems in interaction analysis, such as long-term deception detection, political deception detection, and impression prediction. In this work, we emphasize the importance of using knowledge about the group structure of the analyzed interactions. Previous works on the matter mostly neglected this aspect and analyzed a single subject at a time. Using the new Resistance dataset, collected by our collaborators, we approach the problem of long-term deception detection by designing a class of histogram-based features and a novel class of meta-features we call LiarRank. We develop a LiarOrNot model to identify spies in Resistance videos. We achieve AUCs of over 0.70, outperforming our baselines by 3% and human judges by 12%. For the problem of political deception, we first collect a dataset of videos and transcripts of 76 politicians from 18 countries making truthful and deceptive statements. We call it the Global Political Deception Dataset. We then show how to analyze the statements in a broader context by building a Video-Article-Topic graph. From this graph, we create a novel class of features called Deception Score that captures how controversial each topic is and how it affects the truthfulness of each statement. We show that our approach achieves 0.775 AUC, outperforming competing baselines. Finally, we use the Resistance data to solve the problem of dyadic impression prediction. Our proposed Dyadic Impression Prediction System (DIPS) contains four major innovations: a novel class of features called emotion ranks, sign imbalance features derived from signed graph theory, a novel method to align the facial expressions of subjects, and the concept of a multilayered stochastic network we call Temporal Delayed Network. Our DIPS architecture beats eight baselines from the literature, yielding statistically significant improvements of 19.9-30.8% in AUC.
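    The abstract names rank-based features (LiarRank, emotion ranks) without defining them. A generic rank-histogram feature over per-frame emotion intensities might look like the following sketch; the frame dictionaries and emotion names are invented for illustration and are not the thesis's actual feature definition:

```python
def emotion_rank_feature(frames, target):
    """For each video frame, rank the emotions by intensity (0 = strongest)
    and record the rank of the target emotion; return the normalized
    histogram of those ranks across all frames."""
    n_emotions = len(frames[0])
    hist = [0] * n_emotions
    for scores in frames:
        ordered = sorted(scores, key=scores.get, reverse=True)
        hist[ordered.index(target)] += 1
    return [count / len(frames) for count in hist]

# Two hypothetical frames: joy dominates the first, anger the second
frames = [{"joy": 0.9, "anger": 0.1, "fear": 0.0},
          {"joy": 0.2, "anger": 0.7, "fear": 0.1}]
feature = emotion_rank_feature(frames, "joy")
```

    A rank histogram of this kind is robust to the absolute scale of the intensity scores, which is one plausible motivation for rank-based features in this setting.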

    Deception


    TOWARDS A HOLISTIC RISK MODEL FOR SAFEGUARDING THE PHARMACEUTICAL SUPPLY CHAIN: CAPTURING THE HUMAN-INDUCED RISK TO DRUG QUALITY

    Counterfeit, adulterated, and misbranded medicines in the pharmaceutical supply chain (PSC) are a critical problem. Regulators charged with safeguarding the supply chain are facing shrinking resources for inspections while concurrently facing increasing demands posed by new drug products being manufactured at more sites in the US and abroad. To mitigate risk, the University of Kentucky (UK) Central Pharmacy Drug Quality Study (DQS) tests injectable drugs dispensed within the UK hospital. Using FT-NIR spectrometry coupled with machine learning techniques, the team identifies and flags potentially contaminated drugs for further testing and possible removal from the pharmacy. Teams like the DQS are always working with limited equipment, time, and staffing resources. Scanning every vial immediately before use is infeasible, and drugs must be prioritized for analysis. A risk scoring system coupled with batch sampling techniques is currently used in the DQS. However, a risk scoring system only tells the team about the risks to the PSC today; it does not predict what the risks will be in the future. To begin bridging this gap in predictive modeling capabilities, the authors assert that models must incorporate the human element. A sister project to the DQS, the Drug Quality Game (DQG), enables humans and all of their unpredictability to be inserted into a virtual PSC. The DQG approach was adopted as a means of capturing human creativity, imagination, and problem-solving skills. Current methods of prioritizing drug scans rely heavily on drug cost, sole-source status, warning letters, and equipment and material specifications. However, humans, not machines, commit fraud. Given that even one defective drug product could have catastrophic consequences, this project will improve risk-based modeling by equipping future models to identify and incorporate human-induced risks, expanding the overall landscape of risk-based modeling.
    This exploratory study tested the following hypotheses: (1) a useful game system able to simulate real-life humans and their actions in a pharmaceutical manufacturing process can be designed and deployed, (2) there are variables in the game that are predictive of human-induced risks to the PSC, and (3) the game can identify ways in which bad actors can “game the system” (GTS) to produce counterfeit, adulterated, and misbranded drugs. A commercial-off-the-shelf (COTS) game, BigPharma, was used as the basis of a game system able to simulate the human subjects and their actions in a pharmaceutical manufacturing process. BigPharma was selected as it provides a low-cost, time-efficient virtual environment that captures the major elements of a pharmaceutical business: research, marketing, and manufacturing/processing. Running BigPharma with a Python shell enables researchers to implement specific GxP-related tasks (Good x Practice, where x = Manufacturing, Clinical, Research, etc.) not provided in the COTS BigPharma game. Results from players' interaction with the Python shell/BigPharma environment suggest that the game can identify both variables predictive of human-induced risks to the PSC and ways in which bad actors may GTS. For example, company profitability emerged as one variable predictive of successful GTS. Players' unethical in-game techniques matched well with observations seen within the DQS.

    Agent-based models of long-distance trading societies

    Studying historical trading societies helps us to identify the institutions (e.g. rules) and characteristics that lead to their success or failure. Historically, long-distance trading societies, as a more particular example of trading societies, have been established in various parts of the world. Many of these societies were successful in certain aspects, and interestingly, a number of them were successful despite having different characteristics and institutions. This thesis aims to identify some of the institutions and characteristics that contributed to their success. We use an agent-based simulation model to identify the key characteristics that impacted the success of two long-distance trading societies. The objective of this study is to develop suitable models of these societies that can be used for systematic studies of their characteristics. The two long-distance trading societies studied in this thesis are the British East India Company and the Armenian traders of New Julfa, both of which flourished during the 17th and 18th centuries. In this thesis, we conduct a comparative study of the two societies, based on historical evidence and contemporary literature that provide empirical support. Based on the comparative study, we have identified three overarching themes that distinguish these societies: 1) the contractual schemes used and their environmental characteristics; 2) the apprenticeship programmes and vocational schools used; and 3) the institutional mechanisms used. This thesis presents three models developed corresponding to the three themes, based on the presented comparative study of the societies. The first model was based on game theory and simulates the impact of contractual schemes and environmental circumstances on the two trading societies’ success. In the contractual scheme, we assessed the impact of payment schemes (e.g. profit-sharing), penalties (e.g. dismissal for wrongdoing), and hiring and firing schemes, in the context of open and closed societies, on the success of the society. As a metric of societal success, we considered rule conformity, sustainability, profitability, and improving societal skills. As a next step, we model the impact of apprenticeship programmes and vocational schools on societal success (i.e. maintaining societal skill levels, programme completion rate, and increasing societal income). In this model, we consider the impact of different trader and trainer types (e.g. artisans), along with various policies for managing recruits (e.g. training societal members and hiring from other societies). In our last model, we extend the belief-desire-intention (BDI) mental architecture to model the impact of institutional mechanisms (e.g. fairness) on the success of these societies. Beyond creating models (a methodological contribution) of these societies to demonstrate the influence of characteristics on their success, our work provides practical contributions, such as a) insights into the positive impact of profit-sharing on a company’s profitability; b) the aspects that should be considered for successful apprenticeship programmes; and c) insights into designing societal rules to deal with societies’ lack of transparency.
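    The finding that profit-sharing improves rule conformity can be illustrated with a minimal agent-based sketch. All payoff values and probabilities below are invented assumptions, not the thesis's calibrated model:

```python
import random

def conformity_rate(payment, trials=50_000, seed=1):
    """Toy agent-based sketch: in each trial, a trader conforms to the
    rules if honest pay is at least the realized payoff of cheating.
    Profit sharing adds a stochastic bonus on top of the fixed wage,
    making honesty pay more often."""
    rng = random.Random(seed)
    conform = 0
    for _ in range(trials):
        wage = 1.0
        if payment == "profit_share":
            wage += rng.random()          # assumed share of venture profit
        # Cheating is a gamble: a larger payoff if undetected, a loss if caught
        cheat = 1.2 if rng.random() < 0.5 else 0.4
        conform += wage >= cheat
    return conform / trials

shared = conformity_rate("profit_share")
fixed = conformity_rate("fixed_wage")
```

    Even this crude sketch reproduces the qualitative effect: aligning the trader's payoff with the venture's profit raises the fraction of trials in which conforming dominates cheating.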

    Tätigkeitsbericht 2017-2019/20

    • …