
    Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction

    We develop a natural language interface for human-robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to the deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey of the state of the art.
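    The abstract does not spell out the framework's ROS interface; the sketch below is a minimal, hypothetical illustration of how such a clarification loop might be wired up as a rospy node. Topic names, message types and the toy ambiguity test are assumptions, not the ECG-based analysis described above.

```python
# Minimal sketch of a clarification loop as a ROS (rospy) node.
# Topic names and the ambiguity test are illustrative assumptions,
# not the interface of the framework described in the abstract.
import rospy
from std_msgs.msg import String

AMBIGUOUS_REFERENTS = {"it", "that one", "the other one"}  # toy stand-in for ECG analysis

def on_command(msg):
    text = msg.data.lower()
    if any(ref in text for ref in AMBIGUOUS_REFERENTS):
        # Deep semantic analysis would go here; we only flag obvious ambiguity.
        clarify_pub.publish(String(data="Which object do you mean?"))
    else:
        action_pub.publish(String(data=text))  # hand off to the executor

if __name__ == "__main__":
    rospy.init_node("nlu_clarification_demo")
    clarify_pub = rospy.Publisher("/robot/ask_user", String, queue_size=1)
    action_pub = rospy.Publisher("/robot/execute", String, queue_size=1)
    rospy.Subscriber("/user/command", String, on_command)
    rospy.spin()
```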

    I believe it's possible it might be so.... : Exploiting Lexical Clues for the Automatic Generation of Evidentiality Weights for Information Extracted from English Text

    Information formulated in natural language is being created at an incredible pace, far more quickly than we can make sense of it. Thus, computer algorithms for various kinds of text analytics have been developed to try to find nuggets of new, pertinent and useful information. However, information extracted from text is not always credible or reliable; often buried in sentences are lexical and grammatical structures that indicate the uncertainty of the proposition. Such clues include hedges such as modal adverbs and adjectives, as well as hearsay markers, indicators of inference or belief ("mindsay"), and verb forms identifying future actions which may not take place. In this thesis, we demonstrate how these lexical and grammatical markers of uncertainty can be automatically analyzed to derive an evidential weight for the proposition, which can be used to assess the credibility of the information extracted from English text.
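    As a rough illustration of the idea (not the thesis's actual model), the following sketch discounts a proposition's evidential weight for each hedging, hearsay or future-tense clue it contains; the clue lists and discount factors are invented for the example.

```python
# Illustrative sketch only: clue lists and weights are invented for the example,
# not taken from the thesis's actual evidentiality model.
HEDGES = {"possibly": 0.6, "probably": 0.75, "might": 0.6, "believe": 0.7}
HEARSAY = {"reportedly": 0.5, "allegedly": 0.4, "according to": 0.55}
FUTURE = {"will": 0.7, "plans to": 0.6, "is expected to": 0.6}

def evidential_weight(sentence: str) -> float:
    """Start from full credibility (1.0) and discount for each uncertainty clue."""
    text = sentence.lower()
    weight = 1.0
    for lexicon in (HEDGES, HEARSAY, FUTURE):
        for clue, factor in lexicon.items():
            if clue in text:
                weight *= factor
    return weight

print(evidential_weight("The company will reportedly expand next year."))  # 0.35
```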

    Quantitative Framework For Social Cultural Interactions

    For an autonomous robot or software agent to participate in the social life of humans, it must have a way to perform a calculus of social behavior. Such a calculus must have explanatory power (it must provide a coherent theory for why humans act the way they do) and predictive power (it must provide plausible predictions of the humans' future actions). This dissertation describes a series of contributions that would allow agents observing or interacting with humans to perform a calculus of social behavior taking into account cultural conventions and socially acceptable behavior models. We discuss the formal components of the model: culture-sanctioned social metrics (CSSMs), concrete beliefs (CBs) and action impact functions. Through a detailed case study of a crooked seller who relies on the manipulation of public perception, we show that the model explains how the exploitation of social conventions allows the seller to finalize transactions, despite the fact that the clients know they are being cheated. In a separate study, we show how the crooked seller can find an optimal strategy using reinforcement learning, as in the sketch below. We extend the CSSM model to capture the propagation of public perception across multiple social interactions. We model the evolution of public perception both over a single interaction and over a series of interactions spanning an extended period of time. An important aspect of modeling public perception is its propagation: how it is affected by the spatio-temporal context of the interaction, and how the short-term and long-term memory of humans affects the overall public perception. We validated the CSSM model through a user study in which participants cognizant of the modeled culture had to evaluate the impact on the social values. The scenarios used in the experiments modeled emotionally charged social situations in a cross-cultural setting and with the presence of a robot; they covered conflicts of cross-cultural communication as well as ethical, social and financial choices. This study allowed us to examine whether people sharing the same culture evaluate CSSMs in the same way (the inter-cultural uniformity conjecture). By presenting a wide range of possible metrics, the study also allowed us to determine whether any given metric can be considered a CSSM in a given culture.
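    The reinforcement-learning result mentioned above could, in its simplest form, be obtained with a tabular Q-learning loop such as the toy sketch below; the states, actions and rewards are placeholders, not the dissertation's actual crooked-seller model.

```python
# Toy tabular Q-learning sketch; states, actions and rewards are invented
# placeholders, not the dissertation's crooked-seller model.
import random
from collections import defaultdict

ACTIONS = ["flatter", "pressure", "offer_discount"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative episode with a fake environment response.
state = "client_suspicious"
for _ in range(10):
    action = choose_action(state)
    reward = 1.0 if action == "offer_discount" else 0.0  # stand-in social-metric payoff
    update(state, action, reward, "client_suspicious")
```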

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the List of Articles published in this book, at the end of the volume) have been published or presented after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops and journals. The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importance and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignments, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Accordingly, the second part of this volume is about applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, vehicle-borne improvised explosive devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.
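    For readers unfamiliar with the belief-function machinery the volume builds on, the following sketch shows two of the operations named in the topic list, classical Dempster combination and the pignistic probability transformation, on an invented two-hypothesis example; it does not reproduce the DSmT-specific combination rules developed in the book.

```python
# Minimal sketch of two operations named in the volume's topic list:
# Dempster's rule of combination and the pignistic probability transformation.
# The mass assignments below are invented toy numbers for illustration.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def pignistic(m):
    """Spread each focal element's mass evenly over its singletons."""
    betp = {}
    for focal, mass in m.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"B"}): 0.5, frozenset({"A", "B"}): 0.5}
print(pignistic(dempster_combine(m1, m2)))  # {'A': ~0.57, 'B': ~0.43}
```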

    Distributed Detection and Fusion in Parallel Sensor Architectures

    A parallel distributed detection system consists of several separate sensor-detector nodes (separated spatially or by their principles of operation), each with some processing capabilities. These local sensor-detectors send some information on an observed phenomenon to a centrally located Data Fusion Center for aggregation and decision making. Often, the local sensors use electro-mechanical, optical or RF modalities and are known as "hard" sensors. For such data sources, the sensor observations have structure and often some tractable statistical distributions, which help in weighing their contribution to an integrated global decision. In a distributed detection environment, we often also have "humans in the loop." Humans provide their subjective opinions on these phenomena; these opinions are labeled "soft" data. It is of interest to integrate "soft" decisions, mostly assessments provided by humans, with data from the "hard" sensors, in order to improve global decision reliability. Several techniques were developed to combine data from traditional hard sensors, and a body of work was also created on the integration of "soft" data. However, relatively little work has been done on combining hard and soft data and decisions in an integrated environment. Our work investigates both "hard" and "hard/soft" fusion schemes, and proposes data integration architectures to facilitate heterogeneous sensor data fusion. In the context of "hard" fusion, one of the contributions of this thesis is an algorithm that provides a globally optimum solution for local detector (hard sensor) design that satisfies a Neyman-Pearson criterion (maximal probability of detection under a fixed upper bound on the global false alarm rate) at the fusion center. Furthermore, the thesis also delves into the application of distributed detection techniques in both parallel and sequential frameworks. Specifically, we apply parallel detection and fusion schemes to the problem of real-time computer user authentication, and sequential Kalman filtering to real-time hypoxia detection. In the context of "hard/soft" fusion, we propose a new Dempster-Shafer evidence theory based approach to facilitate heterogeneous sensor data fusion. Application of the framework to a number of simulated example scenarios showcases the wide range of applicability of the developed approach. We also propose and develop a hierarchical evidence tree based architecture for representing nested human opinions. The proposed framework is versatile enough to deal with both hard and soft source data using the evidence theory framework, and it can handle uncertainty as well as data aggregation. Ph.D., Electrical Engineering, Drexel University, 201
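    To illustrate the kind of parallel hard-fusion architecture discussed above (not the thesis's optimal local-detector design), the sketch below applies the classical Chair-Varshney likelihood-ratio fusion of local binary decisions at the fusion center, with invented sensor operating points and threshold.

```python
# Sketch of the classical Chair-Varshney fusion rule for parallel binary decisions.
# Sensor operating points (Pd, Pf) and the threshold are illustrative values only;
# the thesis's contribution (jointly optimal local-detector design under a
# Neyman-Pearson constraint) is not reproduced here.
import math

def fuse(decisions, pd, pf, threshold=0.0):
    """decisions: list of 0/1 local decisions; pd/pf: per-sensor detection and
    false-alarm probabilities. Returns the global decision (1 = target present)."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        llr += math.log(d / f) if u == 1 else math.log((1 - d) / (1 - f))
    return int(llr > threshold)

pd = [0.9, 0.8, 0.95]   # assumed sensor detection probabilities
pf = [0.1, 0.2, 0.05]   # assumed sensor false-alarm probabilities
print(fuse([1, 0, 1], pd, pf))  # weighted vote of the three local detectors -> 1
```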

    Rationality in Econometrics

    The idea of rationality enters an econometrician's work in many ways; e.g., in his presuppositions about sample populations, in his model selections and data analyses, and in his choice of projects. I shall consider some of these ways and their ramifications for the econometrician's own life and for the development of econometrics. I begin with a discussion of rationality that I have found in the writings of Aristotle and other leading philosophers. My aim here is to establish the characteristics that we in good faith can expect rational members of a sample population to possess. The characteristics with which I end up have no definite meaning. Instead they are like undefined terms in mathematics that an econometrician can interpret in ways that suit the purposes of his research and seem appropriate for the population he is studying. When interpreted, the pertinent characteristics of the rational members of a given population become hypotheses whose empirical relevance must be tested. In rationally designed econometric studies the interpretation of 'rationality' that seems appropriate for a given study is usually an interpretation that the pertinent econometrician extracts from various economic theories. I look at some of these interpretations and discuss their empirical relevance. The interpretations of particular interest concern consumer choice under certainty, choice under risky and uncertain conditions, and choice in game-theoretic situations. These interpretations appear in various representations in the ways econometricians model rationality. I single out for discussion microeconometric models of consumer choice and macroeconometric rational expectations models. In the last section of the paper I consider two lacunas in Kuhn's and Lakatos' theories, and see how econometricians go about solving puzzles and extending positive heuristics. I begin by discussing the considerations that guide an econometrician in his choice of research projects. Then, I argue about the determinants of rational choice in model selection. Finally, I consider the politics of writing research reports. The contents of these sections concern aspects of an econometrician's rational choice that are relevant for the orderly development of econometrics.

    Theoretical and Applied Foundations for Intrusion Detection in Single and Federated Clouds

    Cloud computing systems are becoming more and more complex, dynamic and heterogeneous. Such an environment frequently produces complex and noisy data that make Intrusion Detection Systems (IDSs) unable to detect unknown variants of known attacks. A single intrusion or attack in such a heterogeneous system can take various forms that are logically but not syntactically similar. This, in turn, makes traditional IDSs unable to identify these attacks, since they are designed for specific and limited infrastructures. Therefore, detection accuracy in the cloud is very negatively affected. In addition to the problem of the cloud computing environment, cyber attacks are getting more sophisticated and harder to detect. Thus, it is becoming increasingly difficult for a single cloud-based IDS to detect all attacks, because of its limited and incomplete knowledge about attacks and their implications. The problem with existing cloud-based IDS solutions is that they overlook the dynamic and changing nature of the cloud. Moreover, they are fundamentally based on local knowledge and experience to perform the classification of attacks and normal patterns. This renders the cloud vulnerable to "Zero-Day" attacks.
    To this end, we address throughout this thesis two challenges associated with cloud-based IDSs: the detection of cyber attacks in complex, dynamic and heterogeneous environments, and the detection of cyber attacks under limited and/or incomplete information about intrusions and their implications. In this thesis we are interested in making cloud-based IDSs generic, so that they identify intrusions regardless of the infrastructure used. Therefore, whenever an intrusion has been identified, an IDS should be able to recognize all the different forms of such an attack, regardless of the infrastructure that is being used. Moreover, we are interested in allowing cloud-based IDSs to cooperate and share knowledge with each other, in order to make them benefit from each other's expertise and cover unknown attack patterns. The originality of this thesis lies in two aspects: 1) the design of a generic cloud-based IDS that allows detection in changing and heterogeneous environments, and 2) the design of a multi-cloud cooperative IDS that ensures trustworthiness, fairness and sustainability. By trustworthiness, we mean that the cloud-based IDS should be able to ensure that it will consult, cooperate and share knowledge with trusted parties (i.e., other cloud-based IDSs). By fairness, we mean that the cloud-based IDS should be able to guarantee that mutual benefits will be achieved by minimising the chance of cooperating with selfish IDSs. This is useful to give IDSs the motivation to participate in the community.
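    As a rough illustration of the cooperative, trust-aware aggregation described above (the thesis's actual mechanism is not reproduced here), the sketch below weights peer IDS verdicts by trust scores and nudges each peer's trust according to whether it agreed with the final outcome; the peer names, scores and update rule are assumptions.

```python
# Illustrative sketch only: a trust-weighted vote over peer IDS verdicts, meant to
# convey the kind of cooperation described in the abstract. Peer names, trust
# scores and the update rule are assumptions, not the thesis's actual mechanism.
def aggregate(verdicts, trust):
    """verdicts: {peer: 1 if 'malicious' else 0}; trust: {peer: weight in [0, 1]}."""
    total = sum(trust[p] for p in verdicts)
    score = sum(trust[p] * v for p, v in verdicts.items())
    return score / total if total else 0.0

def update_trust(trust, peer, agreed, rate=0.1):
    """Nudge a peer's trust toward 1 when its verdict matched the final outcome."""
    target = 1.0 if agreed else 0.0
    trust[peer] += rate * (target - trust[peer])

trust = {"cloud_A": 0.9, "cloud_B": 0.6, "cloud_C": 0.3}
verdicts = {"cloud_A": 1, "cloud_B": 1, "cloud_C": 0}
suspicion = aggregate(verdicts, trust)          # ~0.83 -> treated as malicious
for peer, v in verdicts.items():
    update_trust(trust, peer, agreed=(v == (suspicion > 0.5)))
```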