
    Empirical Evaluation of Abstract Argumentation: Supporting the Need for Bipolar and Probabilistic Approaches

    In dialogical argumentation it is often assumed that the involved parties always correctly identify the intended statements posited by each other, recognize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views when new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, removing them, or even acknowledging that such removal should happen, is more challenging for some of these concepts than for others. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how strongly a given argument is believed by a given person, offering the possibility to express more than just three agreement states. It is equipped with a wide range of postulates, including some that place no restrictions on how initial arguments should be viewed, thus potentially being more adequate than Dung's semantics for handling the beliefs of people who have not fully disclosed their opinions. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including cases in which not all relations are acknowledged or in which they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we describe the results of an experiment in which participants judged dialogues in terms of agreement and structure. We compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and with bipolar argumentation.
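The epistemic approach can be made concrete with a small sketch. The attack graph, the belief values, and the particular postulate (a coherence condition requiring that, for every attack (a, b), belief in a is at most 1 minus belief in b) are illustrative assumptions, not taken from the paper's experiment:

```python
# Hedged sketch of the epistemic approach: beliefs in [0, 1] replace the
# three-valued accepted/rejected/undecided states. The framework and the
# coherence postulate used here are illustrative assumptions.

def is_coherent(attacks, belief):
    """Coherence postulate: for every attack (a, b), belief in the attacker
    leaves room for disbelief in the target: belief[a] <= 1 - belief[b]."""
    return all(belief[a] <= 1 - belief[b] for a, b in attacks)

# a attacks b, b attacks c
attacks = [("a", "b"), ("b", "c")]

belief = {"a": 0.9, "b": 0.1, "c": 0.7}
print(is_coherent(attacks, belief))    # True: 0.9 <= 0.9 and 0.1 <= 0.3

belief2 = {"a": 0.9, "b": 0.6, "c": 0.2}
print(is_coherent(attacks, belief2))   # False: 0.9 > 1 - 0.6
```

Graded beliefs of this kind can encode agreement states far finer than the three states of classical semantics.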

    The World-Trade Web: Topological Properties, Dynamics, and Evolution

    This paper studies the statistical properties of the web of import-export relationships among world countries using a weighted-network approach. We analyze how the distributions of the most important network statistics measuring connectivity, assortativity, clustering and centrality have co-evolved over time. We show that all node-statistic distributions and their correlation structure have remained surprisingly stable in the last 20 years -- and are likely to remain so in the future. Conversely, the distribution of (positive) link weights is slowly moving from a log-normal density towards a power law. We also characterize the autoregressive properties of network-statistic dynamics. We find that network-statistic growth rates are well proxied by fat-tailed densities like the Laplace or the asymmetric exponential-power. Finally, we find that all our results are reasonably robust to a few alternative, economically meaningful weighting schemes.
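The node statistics in question can be illustrated on a toy weighted network; the country labels, weights, and the particular statistic chosen (node strength, i.e. total incident link weight) are illustrative assumptions, not data from the actual trade dataset:

```python
# Toy weighted trade web; labels and weights are illustrative assumptions.
import math

# (exporter, importer, trade volume)
edges = [("A", "B", 5.0), ("B", "A", 2.0), ("A", "C", 1.0), ("C", "B", 4.0)]

def node_strength(edges, node):
    """Connectivity statistic: total weight of links touching `node`."""
    return sum(w for u, v, w in edges if node in (u, v))

print(node_strength(edges, "A"))  # 8.0
print(node_strength(edges, "B"))  # 11.0

# Log growth rates of a node statistic over the years, whose distribution
# the paper proxies with fat-tailed densities such as the Laplace:
yearly_strength = [5.0, 6.0, 4.8]
growth = [math.log(yearly_strength[t + 1] / yearly_strength[t])
          for t in range(len(yearly_strength) - 1)]
```

Collecting such growth rates across all nodes and years yields the distributions whose tails the paper fits.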

    Residual Correlation in Graph Neural Network Regression

    A graph neural network transforms the features in each vertex's neighborhood into a vector representation of the vertex. Afterward, each vertex's representation is used independently to predict its label. This standard pipeline implicitly assumes that vertex labels are conditionally independent given their neighborhood features. However, this is a strong assumption, and we show that it is far from true on many real-world graph datasets. Focusing on regression tasks, we find that this conditional independence assumption severely limits predictive power. This should not be surprising, given that traditional graph-based semi-supervised learning methods, such as label propagation, work in the opposite fashion by explicitly modeling the correlation in predicted outcomes. Here, we address this problem with an interpretable and efficient framework that can improve any graph neural network architecture simply by exploiting the correlation structure in the regression residuals. In particular, we model the joint distribution of residuals on vertices with a parameterized multivariate Gaussian, and estimate the parameters by maximizing the marginal likelihood of the observed labels. Our framework achieves substantially higher accuracy than competing baselines, and the learned parameters can be interpreted as the strength of correlation among connected vertices. Furthermore, we develop linear-time algorithms for low-variance, unbiased model parameter estimates, allowing us to scale to large networks. We also provide a basic version of our method that makes stronger assumptions on the correlation structure but is painless to implement, often leading to great practical performance with minimal overhead.
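A minimal numpy sketch of the residual-correlation idea: model the residuals on vertices as a zero-mean multivariate Gaussian whose precision matrix encodes the graph, then use the conditional Gaussian mean to propagate observed training residuals to test vertices. The toy graph, the precision form I - alpha*S, and the value of alpha are assumptions for illustration; in the paper the parameters are learned by maximizing the marginal likelihood of the observed labels:

```python
import numpy as np

# Path graph on 4 vertices: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha = 0.5                            # assumed correlation strength
Gamma = np.eye(4) - alpha * A / 2.0    # toy precision matrix I - alpha*S

obs, unobs = [0, 1], [2, 3]            # train / test vertex split
r_obs = np.array([1.0, 0.8])           # base-model residuals on train vertices

# Conditional Gaussian mean: E[r_u | r_o] = -Gamma_uu^{-1} Gamma_uo r_o
G_uu = Gamma[np.ix_(unobs, unobs)]
G_uo = Gamma[np.ix_(unobs, obs)]
r_unobs = -np.linalg.solve(G_uu, G_uo @ r_obs)

# Vertex 2 (adjacent to an observed vertex) inherits more residual than
# vertex 3, which is two hops away:
print(r_unobs[0] > r_unobs[1] > 0)     # True
```

Adding the predicted residual back onto the base model's output is what corrects the conditional-independence assumption.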

    On-shell methods for off-shell quantities in N = 4 Super Yang-Mills: From scattering amplitudes to form factors and the dilatation operator

    Planar maximally supersymmetric Yang-Mills theory (N = 4 SYM) is a special quantum field theory. A few of its remarkable features are conformal symmetry at the quantum level and evidence of integrability; moreover, it is a prime example of the AdS/CFT duality. Triggered by Witten's twistor string theory [1], the past 15 years have witnessed enormous progress in reformulating this theory to make as many of these special features as possible manifest, from the choice of convenient variables to recursion relations that allowed new mathematical structures to appear, like the Grassmannian [2]. These methods are collectively referred to as on-shell methods. The ultimate hope is that, by understanding N = 4 SYM in depth, one can learn about other, more realistic quantum field theories. The overarching theme of this thesis is the investigation of how on-shell methods can aid the computation of quantities other than scattering amplitudes. In this spirit we study form factors and correlation functions, said to be partially and completely off-shell quantities, respectively. More explicitly, we compute form factors of half-BPS operators up to two loops, and study the dilatation operator in the SO(6) and SU(2|3) sectors using techniques originally designed for amplitudes. A second part of the work is dedicated to the study of scattering amplitudes beyond the planar limit, an area of research which is still in its infancy, where not much is known about which special features of the planar theory survive in the non-planar regime. In this context, we generalise some aspects of the on-shell diagram formulation of Arkani-Hamed et al. [3] to take into account non-planar corrections.
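The form factors mentioned above pair a single gauge-invariant operator with n on-shell states; schematically (a standard textbook definition, not a formula quoted from this thesis):

```latex
F_{\mathcal{O}}(1, \dots, n; q) \;=\; \int \mathrm{d}^4 x \; e^{-i q \cdot x}\,
\langle 1 \cdots n \,|\, \mathcal{O}(x) \,|\, 0 \rangle ,
\qquad q = \sum_{i=1}^{n} p_i .
```

This is why form factors are called partially off-shell: the external states carry on-shell momenta p_i, while the operator injects an arbitrary off-shell momentum q.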

    Essays on the Network Analysis of Culture

    In economic relations, in international agreements and in institutional dialogue, the word distance is one of the most often invoked. There are exogenous distances to be bridged in order to create ties; sometimes there are necessary closures and at other times inevitable ruptures, and this may depend, along with geographical and physical distances and implicit interests, largely on the cultural status of groups of individuals. The quantitative evaluation of the distance between two entities is a dyadic property and, as such, the presence, intensity, direction and sign of their tie is a way to capture it. Since entities can be individuals, objects, companies, countries or planets, as well as networks referring to specific contexts, and similarity between them can be measured in various ways, one peculiarity of distances is their changeable nature. While physical distances are almost objectively computable, in the case of culture (and of other more or less broad concepts) using one method rather than another could radically change the proximity relationship between entities, especially if they have a high degree of complexity. Cultural background plays an important role in determining the socio-economic status of a country and its characterization in terms of similarity to other countries. Chapter 1 - using data from the WVS/EVS Joint 2017 - operationalizes a definition of culture that takes into account the interdependencies between cultural traits at the country level and proposes a new measure of cultural distance. 
Taking advantage of a recent Bayesian algorithm for Gaussian copula graphical models, this Chapter estimates, for each of the 76 countries included in the WVS/EVS Joint 2017, the cultural network of interdependencies between cultural traits, considering several sets of them: the 6 from the first battery of questions; the 10 of the Inglehart-Welzel Cultural Map; the 14 of the Inglehart-Welzel Cultural Map, where the “Post-materialism” and “Autonomy” indices are replaced by the variables from which they are derived; and 60 cultural traits, of which 14 are as previously defined, 6 refer to the first battery of questions, and the remaining 40 are selected so as to balance the trade-off between the algorithm's processing time and the minimum number of missing values per country. After defining the distances between countries considering both the cultural networks and the distributions of cultural traits, this Chapter observes via DISTATIS how the addition of the network component to the classic distributional one substantially modifies the measure of cultural distance, both in the case of a few cultural traits (6, 10 and 14) and in the case of more cultural traits (60). Finally, it argues that the network structure of national culture matters for the definition of cultural distance among countries worldwide, and it finds two final distance measures: Compromise_Large (from 60 variables) and Compromise_IW (from the Inglehart-Welzel cultural map variables). The effect of cultural variables on the economic situation of a country, or more generally of a geographically definable area, has been closely examined in recent years by the economic literature. Cultural, genetic, geographical, climatic, semantic, ethnic, linguistic and political distances have often been included in econometric models as independent or control variables. 
Chapter 2 follows this literature, first by individually comparing three measurements of cultural distance calculated in Chapter 1 with other distances used in the literature together with cultural distance or as proxies for it, and then by jointly comparing them (the measurements of cultural distance and those from the literature) via DISTATIS. The three cultural distances are the two new measures mentioned above (Compromise_Large and Compromise_IW) and the IW index, obtained as the Euclidean distance between countries in the Inglehart-Welzel cultural map, while the other distances take into consideration climatic conditions, ethnicity and language, genetics, and the recent phenomenon of Facebook. Finally, this Chapter feeds these distance measures into a Social Relations Regression Model (SRRM) which explains the distance between countries in GDP per capita (year 2017). The final result shows that the cultural distances are poorly correlated with the distances from the literature, and when a compromise is found between them, Compromise_Large is usually characterized by a slightly higher weight. The main conclusion concerns the substantial explanatory power of the Compromise_Large distance for the distance in GDP per capita compared to that of the IW index and of Compromise_IW, which sits between the two. This confirms the importance of considering the national cultural network of interdependencies between cultural traits in the overall definition of cultural distance, and also that the addition of more cultural traits may influence its specification, although the cultural traits considered by Ronald Inglehart and Christian Welzel in the construction of their cultural map already seem to capture a good part of the cultural information of the countries. 
The enormous production of data in our time has allowed the observation of large collections of networks within a specific field of analysis, whose members may also differ in size from one another (think, for example, of the trade network between countries for each product). A network is a complex object, so a common way to analyze and compare a set of networks jointly is to reduce their complexity by mapping them into a space through the descriptors that characterize them. This is where the problem analyzed in Chapter 3 arises: what is the subset of descriptors that keeps the characteristics of the networks as unchanged as possible in the mapping process, that is, projects non-isomorphic networks to different points of the space, groups structurally similar networks close together, and keeps dissimilar networks far apart? Through a simulation of networks from four generative models (Random, Scale-free, Small-world and Stochastic block model) and the selection of a wide set of descriptors covering the micro, meso and macro levels of network analysis, this Chapter finds, via Subgroup Discovery, a small subset of descriptors. This subset is composed of 5 descriptors: the first moment of the Local Clustering Coefficient, 3 Motifs configurations and the Smallworldness descriptor. The effectiveness of the descriptors is evaluated by applying them to the set of binary cultural networks with 60 cultural traits estimated in Chapter 1 and comparing the distances between these network points in the descriptor space with popular network distances from the literature. The main innovations are two: the construction of a new index of cultural distance among countries, which includes the cultural network of interdependencies among cultural traits; and the selection of a small, efficient subset of descriptors for mapping sets of binary networks, possibly of different sizes, into a space.
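One of the selected descriptors is easy to sketch; the toy binary network below (a 4-clique with a pendant vertex) and the function names are illustrative assumptions, not the thesis's actual descriptor implementation:

```python
# Toy binary network as an adjacency-set dict; vertices 0-3 form a clique
# and vertex 4 hangs off vertex 3. All names here are illustrative.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}

def local_clustering(adj, v):
    """Fraction of pairs of neighbours of v that are themselves linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Descriptor: first moment (mean) of the Local Clustering Coefficient.
mean_cc = sum(local_clustering(adj, v) for v in adj) / len(adj)
print(round(mean_cc, 3))  # 0.7
```

Computing a handful of such scalar descriptors per network is what projects each network to a point in the descriptor space, so that distances between networks reduce to distances between points.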

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom


    A Compass to Controlled Graph Rewriting

    With the growing complexity and autonomy of software-intensive systems, abstract modeling to study and formally analyze those systems is gaining in importance. Graph rewriting is an established, theoretically founded formalism for the graphical modeling of the structure and behavior of complex systems. A graph-rewriting system consists of declarative rules, providing templates for potential changes in the modeled graph structures over time. Today's complex software systems, often involving distribution and, thus, concurrency and reactive behavior, pose a challenge to the hidden assumption of global knowledge behind graph-based modeling; in particular, describing their dynamics by rewriting rules often involves a need for additional control to reflect algorithmic system aspects. To that end, controlled graph rewriting has been proposed, where an external control language guides the sequence in which rules are applied. However, approaches elaborating on this idea so far either have a practical, implementational focus without elaborating on formal foundations, or a pure input-output semantics without further considering concurrent and reactive notions. In the present thesis, we propose a comprehensive theory for an operational semantics of controlled graph rewriting, based on well-established notions from the theory of process calculi. In the first part, we illustrate the aforementioned fundamental phenomena by means of a simplified model of wireless sensor networks (WSN). After recapitulating the necessary background on DPO graph rewriting, the formal framework used throughout the thesis, we present an extensive survey of the state of the art in controlled graph rewriting, along with the challenges which we address in the second part, where we elaborate our theoretical contributions. 
As a novel approach, we propose a process calculus for controlled graph rewriting, called RePro, where DPO rule applications are controlled by process terms closely resembling the process calculus CCS. In particular, we address the aforementioned challenges: (i) we propose a formally founded control language for graph rewriting with an operational semantics, (ii) explicitly addressing concurrency and reactive behavior in system modeling, and (iii) allowing for a proper handling of process equivalence and action independence using process-algebraic notions. Finally, we present a novel abstract verification approach for graph rewriting based on abstract interpretation of reactive systems. To that end, we propose so-called compasses as an abstract representation of infinite graph languages and demonstrate their use for the verification of process properties over infinite input sets.
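The core idea of rule applications guided by an external control term can be sketched in miniature; the graph representation, the rules, and the CCS-like sequential program below are illustrative assumptions, not the actual RePro calculus:

```python
# Toy "controlled" rewriting: an external control term fixes the order in
# which declarative rules fire. Graph, rules, and program are assumptions.

graph = {"edges": {("a", "b"), ("b", "c")}}

def rule_remove(g, edge):
    """Delete an edge; returns None when the rule is not applicable."""
    if edge in g["edges"]:
        return {"edges": g["edges"] - {edge}}
    return None

def rule_add(g, edge):
    """Add an edge (always applicable)."""
    return {"edges": g["edges"] | {edge}}

# Control term: sequential composition in the spirit of CCS-like prefixes,
# roughly `remove . add . 0`:
program = [("remove", ("a", "b")), ("add", ("a", "c"))]
rules = {"remove": rule_remove, "add": rule_add}

g = graph
for name, arg in program:
    g = rules[name](g, arg)
    assert g is not None, f"control term stuck at {name}"
print(sorted(g["edges"]))  # [('a', 'c'), ('b', 'c')]
```

An operational semantics in the thesis's sense would instead define labeled transitions over (process term, graph) pairs, so that equivalence and independence of actions can be reasoned about process-algebraically.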

    Timing in Technical Safety Requirements for System Designs with Heterogeneous Criticality Requirements

    Traditionally, timing requirements as (technical) safety requirements have been avoided through clever functional designs. New vehicle automation concepts and other applications, however, make this harder or even impossible, and challenge design automation for cyber-physical systems to provide a solution. This thesis takes up this challenge by introducing a cross-layer dependency analysis to relate timing dependencies in the bounded execution time (BET) model to the functional model of the artifact. In doing so, the analysis is able to reveal where timing dependencies may violate freedom-from-interference requirements on the functional layer and other intermediate model layers. For design automation this leaves the challenge of how such dependencies can be avoided, or at least bounded, such that the design is feasible: the results are synthesis strategies for implementation requirements and a system-level placement strategy for run-time measures to avoid potentially catastrophic consequences of timing dependencies which are not eliminated from the design. Their applicability is shown in experiments and case studies. However, all the proposed run-time measures, as well as very strict implementation requirements, become ever more expensive in terms of design effort for contemporary embedded systems, due to the systems' complexity. Hence, the second part of this thesis reflects on the design aspect rather than the analysis aspect of embedded systems and proposes a timing-predictable design paradigm based on System-Level Logical Execution Time (SL-LET). Leveraging a timing-design model in SL-LET, the methods proposed in the first part can now be applied to improve the quality of a design -- timing error handling can now be separated from the run-time methods and from the implementation requirements intended to guarantee them. 
The thesis therefore introduces timing diversity as a timing-predictable execution theme that handles timing errors without having to deal with them in the implemented application. An automotive 3D-perception case study demonstrates the applicability of timing diversity to ensure predictable end-to-end timing while masking certain types of timing errors.
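SL-LET builds on the Logical Execution Time idea. A minimal sketch of the core LET discipline follows: outputs become visible exactly at the end of a task's logical interval, independent of when the computation actually finishes. The task set and its timings are illustrative assumptions:

```python
# Hedged sketch of Logical Execution Time (LET) semantics: each task reads
# its inputs at its release and publishes its outputs at release + LET,
# deterministically, even if the computation finishes earlier.

def let_schedule(tasks):
    """tasks: list of (name, release, let_duration, actual_finish).
    Returns the instant at which each output becomes visible under LET."""
    visible = {}
    for name, release, let, actual_finish in tasks:
        # The computation must fit inside its logical window.
        assert actual_finish <= release + let, f"{name} missed its LET window"
        visible[name] = release + let   # publish instant is fixed by design
    return visible

tasks = [("sense", 0, 10, 4), ("plan", 10, 20, 22), ("act", 30, 10, 35)]
print(let_schedule(tasks))  # {'sense': 10, 'plan': 30, 'act': 40}
```

Because publish instants are fixed by construction rather than by actual execution times, end-to-end timing stays predictable, which is the property the thesis exploits to mask certain timing errors.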

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 17, 2014 at VMASC in Suffolk, Virginia.