11 research outputs found

    Inferring Attack Relations for Gradual Semantics

    Get PDF
    Peer reviewed | Publisher PDF

    Reliable statistical modeling of weakly structured information

    Get PDF
    The statistical analysis of "real-world" data is often confronted with the fact that most standard statistical methods were developed under some kind of idealization of the data that is often not adequate in practical situations. This concerns, among other things, i) the potentially deficient quality of the data, which can arise for example from measurement error, non-response in surveys or data processing errors, and ii) the scale quality of the data, which is idealized as "the data have some clear scale of measurement that can be uniquely located within the scale hierarchy of Stevens (or that of Narens and Luce or Orth)". Modern statistical methods such as correction techniques for measurement error or robust methods cope with issue i). In the context of missing or coarsened data, imputation techniques and methods that explicitly model the missing/coarsening process are nowadays well-established tools of refined data analysis. Concerning ii), the typical statistical viewpoint is a more pragmatic one: in case of doubt, one simply presumes the strongest scale of measurement that is clearly "justified". In more complex situations, for example the analysis of ranking data, statisticians often do not worry too much about purely measurement-theoretic reservations, but instead embed the data structure in an appropriate, easy-to-handle space, e.g. a metric space, and then use all statistical tools available for this space. Against this background, the present cumulative dissertation tries to contribute from different perspectives to the appropriate handling of data that challenge the above-mentioned idealizations. The focus is, on the one hand, on the analysis of interval-valued and set-valued data within the methodology of partial identification and, on the other hand, on the analysis of data with values in a partially ordered set (poset-valued data). Further tools of statistical modeling treated in the dissertation are necessity measures in the context of possibility theory and concepts of stochastic dominance for poset-valued data. The present dissertation consists of 8 contributions, which are discussed in detail in the following sections: Contribution 1 analyzes different identification regions for partially identified linear models under interval-valued responses and develops a further kind of identification region (as well as a corresponding estimator). Estimates for the identification regions are compared to each other, and to classical statistical approaches, for a data set describing the quality of red wines, as judged by experts, in dependence on physicochemical properties. Contribution 2 deals with logistic regression under coarsened responses, analyzes point-identifying assumptions and develops likelihood-based estimators for the identified set. The methods are illustrated with data from one wave of the panel study "Labor Market and Social Security" (PASS). Contribution 3 analyzes the combinatorial structure of the extreme points and the edges of a polytope (called credal set or core in the literature) that plays a crucial role in imprecise probability theory. Furthermore, an efficient algorithm for enumerating all extreme points is given and compared to existing standard methods. Contribution 4 develops a quantile concept for data or random variables with values in a complete lattice, which is applied in Contribution 5 to ranking data from a data set on the wisdom-of-the-crowd phenomenon.
In Contribution 6 a framework for evaluating the quality of different aggregation functions of Social Choice Theory is developed, which enables analyzing quality as a function of group-specific homogeneity. In a simulation study, selected aggregation functions, including an aggregation function based on the concepts of Contributions 4 and 5, are analyzed. Contribution 7 supplies a linear program for detecting stochastic dominance between poset-valued random variables, gives proposals for inference and regularization, and generalizes the approach to the general task of optimizing a linear function on a closure system. The generality of the developed methods is illustrated with data examples from multivariate inequality analysis, item impact and differential item functioning in item response theory, the analysis of distributional differences in spatial statistics, and guided regularization in cognitive diagnosis models. Contribution 8 uses concepts of stochastic dominance to establish a descriptive approach for a relational analysis of person ability and item difficulty in multidimensional item response theory, deriving pairs of mutually empirically supporting relations of person ability and item difficulty. All developed methods have been implemented in R ([R Development Core Team, 2014]) and are available from the author upon request. The application examples corroborate the usefulness of the weak types of statistical modeling examined in this thesis, which, beyond their flexibility to deal with many kinds of data deficiency, can still lead to informative substantive conclusions that are then all the more reliable due to the weak modeling.
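
Contribution 7's core idea, checking stochastic dominance between poset-valued random variables with a linear program, can be illustrated with a small sketch. The thesis's actual formulation is not reproduced here; the code below instead uses the coupling characterization of first-order dominance on a finite poset (a coupling of the two distributions whose mass sits only on ordered pairs exists iff dominance holds) and phrases it as an LP feasibility problem with scipy. The function names, the example poset and the probability vectors are illustrative assumptions.

```python
# Illustrative sketch (not the thesis's formulation): test first-order stochastic
# dominance of p over q on a finite poset via an LP feasibility problem.
# A coupling exists whose mass lives only on pairs (x, y) with y <= x in the
# partial order iff p dominates q.
from scipy.optimize import linprog

def dominates(p, q, leq):
    """p, q: probability vectors over poset elements 0..n-1.
    leq[i][j] is True iff element i <= element j in the partial order."""
    n = len(p)
    # One variable per comparable pair: mass placed on (x=i, y=j) with j <= i.
    pairs = [(i, j) for i in range(n) for j in range(n) if leq[j][i]]
    if not pairs:
        return False
    A_eq, b_eq = [], []
    # Row marginal: sum_j w[(i, j)] = p[i] for every x = i.
    for i in range(n):
        A_eq.append([1.0 if a == i else 0.0 for (a, b) in pairs])
        b_eq.append(p[i])
    # Column marginal: sum_i w[(i, j)] = q[j] for every y = j.
    for j in range(n):
        A_eq.append([1.0 if b == j else 0.0 for (a, b) in pairs])
        b_eq.append(q[j])
    res = linprog(c=[0.0] * len(pairs), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(pairs), method="highs")
    return res.status == 0  # feasible <=> dominance holds

# Example poset {a < c, b < c} with a, b incomparable; indices a=0, b=1, c=2.
leq = [[True, False, True], [False, True, True], [False, False, True]]
print(dominates([0.2, 0.2, 0.6], [0.4, 0.4, 0.2], leq))  # True: mass shifted upward
print(dominates([0.4, 0.4, 0.2], [0.2, 0.2, 0.6], leq))  # False
```

The feasibility problem has one variable per comparable pair, so the check stays linear in the number of such pairs rather than enumerating all upper sets of the poset.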

    Formal Methods of Argumentation as Models of Engineering Design Decisions and Processes

    Get PDF
    Complex engineering projects comprise many individual design decisions. As these decisions are made over the course of months, even years, and across different teams of engineers, it is common for them to be based on different, possibly conflicting assumptions. The longer these inconsistencies go undetected, the costlier they are to resolve. Therefore it is important to spot them as early as possible. There is currently no software aimed explicitly at detecting inconsistencies in interrelated design decisions. This thesis is a step towards the development of such tools. We use formal methods of argumentation, a branch of artificial intelligence, as the foundation of a logical model of design decisions capable of handling inconsistency. The thesis has three parts. First, argumentation is used to model the pros and cons of individual decisions and to reason about the possible worlds in which these arguments are justified. In the second part we study sequences of interrelated decisions. We identify cases where the arguments in one decision invalidate the justification for another decision, and develop a measure of the impact that choosing a specific option has on the consistency of the overall design. The final part of the thesis is concerned with non-deductive arguments, which are used in design debates, for example to draw analogies between past and current problems. Our model integrates deductive and non-deductive arguments side by side. This work is supported by our collaboration with the engineering department of Queen’s University Belfast and an industrial partner. The thesis contains two case studies of realistic problems, and parts of it were implemented as software prototypes. We also give theoretical results demonstrating the internal consistency of our model.
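
As a rough illustration of the kind of reasoning the first part describes (which arguments survive once their attackers are taken into account), the sketch below computes the grounded extension of a tiny Dung-style argumentation framework. It is not the thesis's model of design decisions, and the three arguments about a hypothetical housing/bracket/weight-budget decision are invented for the example.

```python
# Minimal Dung-style abstract argumentation framework: compute the grounded
# extension (the least fixed point of the characteristic function), i.e. the
# arguments that can be defended without any contentious commitments.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if every one
        # of its attackers is itself attacked by some argument in the extension.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers_of[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical design-decision arguments:
#   a1: "use aluminium housing" (decision 1, assumes weight budget W1)
#   a2: "add steel bracket" (decision 2, exceeds W1, so it attacks a1)
#   a3: "revised weight budget W2 approved" (undercuts a2's attack)
arguments = {"a1", "a2", "a3"}
attacks = {("a2", "a1"), ("a3", "a2")}
print(grounded_extension(arguments, attacks))  # {'a1', 'a3'}: a1 is defended by a3
```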

    Remote Sensing and Geosciences for Archaeology

    Get PDF
    This book collects more than 20 papers, written by renowned experts and scientists from across the globe, that showcase the state of the art and forefront research in archaeological remote sensing and the use of geoscientific techniques to investigate archaeological records and cultural heritage. Very high resolution satellite images from optical and radar space-borne sensors, airborne multi-spectral images, ground penetrating radar, terrestrial laser scanning, 3D modelling and Geographic Information Systems (GIS) are among the techniques used in the archaeological studies published in this book. The reader can learn how to use these instruments and sensors, also in combination, to investigate cultural landscapes, discover new sites, reconstruct paleo-landscapes, augment the knowledge of monuments, and assess the condition of heritage at risk. Case studies scattered across Europe, Asia and America are presented: from the UNESCO World Heritage Site of the Lines and Geoglyphs of Nasca and Palpa to heritage under threat in the Middle East and North Africa, from coastal heritage in the intertidal flats of the German North Sea to Early and Neolithic settlements in Thessaly. Beginners will learn robust research methodologies and take inspiration; mature scholars will surely derive input for new research and applications.

    3D Recording and Interpretation for Maritime Archaeology

    Get PDF
    This open access peer-reviewed volume was inspired by the UNESCO UNITWIN Network for Underwater Archaeology International Workshop held at Flinders University, Adelaide, Australia in November 2016. Content is based on, but not limited to, the work presented at the workshop, which was dedicated to 3D recording and interpretation for maritime archaeology. The volume consists of contributions from leading international experts as well as up-and-coming early career researchers from around the globe. The content of the book includes recording and analysis of maritime archaeology through emerging technologies, including both practical and theoretical contributions. Topics include photogrammetric recording, laser scanning, marine geophysical 3D survey techniques, virtual reality, 3D modelling and reconstruction, data integration and Geographic Information Systems. The principal incentive for this publication is the ongoing rapid shift in the methodologies of maritime archaeology in recent years and a marked increase in the use of 3D and digital approaches. This convergence of digital technologies such as underwater photography and photogrammetry, 3D sonar, 3D virtual reality, and 3D printing has highlighted a pressing need for these new methodologies to be considered together, both in terms of defining the state of the art and for consideration of future directions. As a scholarly publication, the audience for the book includes students and researchers, as well as professionals working in various aspects of archaeology, heritage management, education, museums, and public policy. It will be of special interest to those working in the field of coastal cultural resource management and underwater archaeology, but will also be of broader interest to anyone interested in archaeology and to those in other disciplines who are now engaging with 3D recording and visualization.

    Building bridges for better machines: from machine ethics to machine explainability and back

    Get PDF
    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish. Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach that stands to outperform many other explainability approaches developed so far.
    DFG: CRC 248: Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent Systems
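
As a toy illustration only of how the two facets fit together, behavioral constraints that restrict what a system may do and explanations of why an action was permitted or blocked, the sketch below filters candidate actions through explicit constraints and reports the reasons. It is not the thesis's framework, and the hiring-scenario constraints and field names are invented.

```python
# Toy sketch: filter candidate actions through explicit ethical constraints and
# return, for each action, a human-readable explanation of the outcome.
from typing import Callable, Dict, List, Tuple

# Each constraint maps an action description to (satisfied?, reason).
Constraint = Callable[[Dict], Tuple[bool, str]]

def no_protected_attributes(action: Dict) -> Tuple[bool, str]:
    used = set(action.get("features_used", [])) & {"gender", "age", "ethnicity"}
    return (not used,
            f"uses protected attributes {sorted(used)}" if used else "uses no protected attributes")

def human_in_the_loop(action: Dict) -> Tuple[bool, str]:
    ok = action.get("reviewed_by_human", False)
    return (ok, "was reviewed by a human" if ok else "lacks human review")

def evaluate(actions: List[Dict], constraints: List[Constraint]):
    for action in actions:
        verdicts = [c(action) for c in constraints]
        permitted = all(ok for ok, _ in verdicts)
        reasons = "; ".join(reason for _, reason in verdicts)
        yield action["name"], permitted, reasons

candidates = [
    {"name": "auto-reject", "features_used": ["age", "grades"], "reviewed_by_human": False},
    {"name": "shortlist", "features_used": ["grades"], "reviewed_by_human": True},
]
for name, permitted, reasons in evaluate(candidates, [no_protected_attributes, human_in_the_loop]):
    print(f"{name}: {'permitted' if permitted else 'blocked'} ({reasons})")
```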

    Actes des 22èmes rencontres francophones sur la Logique Floue et ses Applications (Proceedings of the 22nd French-Speaking Conference on Fuzzy Logic and its Applications), 10-11 October 2013, Reims, France

    Get PDF

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    Get PDF
    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented at international conferences, seminars, workshops and in journals after the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged in the years since the appearance of the fourth book in 2015, the second part of this volume is devoted to selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
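
The Proportional Conflict Redistribution rules mentioned throughout can be made concrete with a minimal sketch of the classical PCR5 rule for two sources: each conflicting mass product is redistributed back to the two conflicting focal elements in proportion to the masses they contributed. The formula follows the standard DSmT literature; the implementation, the frame {A, B} and the example belief assignments are illustrative and not taken from the book.

```python
# Sketch of the PCR5 combination rule for two basic belief assignments (BBAs),
# with focal elements represented as frozensets over a frame of discernment.
# The conjunctive consensus is computed first; each partial conflict
# m1(X)*m2(Y) with X ∩ Y = ∅ is then redistributed to X and Y proportionally
# to m1(X) and m2(Y).
from collections import defaultdict

def pcr5(m1, m2):
    combined = defaultdict(float)
    # Conjunctive consensus on non-empty intersections.
    for X, a in m1.items():
        for Y, b in m2.items():
            inter = X & Y
            if inter:
                combined[inter] += a * b
    # Proportional redistribution of each partial conflict.
    for X, a in m1.items():
        for Y, b in m2.items():
            if not (X & Y) and a + b > 0:
                combined[X] += a * a * b / (a + b)
                combined[Y] += b * b * a / (a + b)
    return dict(combined)

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B
m1 = {A: 0.6, AB: 0.4}   # source 1
m2 = {B: 0.3, AB: 0.7}   # source 2
fused = pcr5(m1, m2)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
# The partial conflict m1(A)*m2(B) = 0.18 is split back to A and B
# proportionally to 0.6 and 0.3, so A receives 0.12 and B receives 0.06.
```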