
    A complexity dichotomy for poset constraint satisfaction

    In this paper we determine the complexity of a broad class of problems that extends the temporal constraint satisfaction problems. More precisely, we study the problems Poset-SAT(Φ), where Φ is a given set of quantifier-free ≤-formulas. An instance of Poset-SAT(Φ) consists of finitely many variables x_1, …, x_n and formulas φ_i(x_{i_1}, …, x_{i_k}) with φ_i ∈ Φ; the question is whether this input is satisfiable by some partial order on x_1, …, x_n. We show that every such problem is NP-complete or can be solved in polynomial time, depending on Φ. All Poset-SAT problems can be formalized as constraint satisfaction problems on reducts of the random partial order. We use model-theoretic concepts and techniques from universal algebra to study these reducts. In the course of this analysis we establish a dichotomy that we believe is of independent interest in universal algebra and model theory.
    Comment: 29 pages
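The Poset-SAT decision problem described above can be sketched as a brute-force search over all strict partial orders on the variables. The encoding of the formulas in Φ as Python predicates and all names below are illustrative assumptions, not the paper's formalism:

```python
from itertools import product

def is_strict_partial_order(n, lt):
    """lt: set of pairs (a, b) meaning a < b; check irreflexivity,
    antisymmetry and transitivity."""
    for a, b in lt:
        if a == b or (b, a) in lt:
            return False
    for a, b in lt:
        for c in range(n):
            if (b, c) in lt and (a, c) not in lt:
                return False
    return True

def poset_sat(n, constraints):
    """constraints: predicates on the relation lt, standing in for the
    quantifier-free formulas phi_i in Phi. Exponential in n^2; a sketch,
    not the dichotomy algorithm of the paper."""
    pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
    for bits in product((False, True), repeat=len(pairs)):
        lt = {p for p, keep in zip(pairs, bits) if keep}
        if is_strict_partial_order(n, lt) and all(c(lt) for c in constraints):
            return True
    return False
```

For example, the constraints x_0 < x_1, x_1 < x_2, and "x_0 incomparable to x_2" are unsatisfiable, since transitivity forces x_0 < x_2.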

    Domain Theory 101: an ideal exploration of this domain

    Bugs in programs are frustrating and degrade the user experience. For example, leaks of bank data or of compromising videos or photos can seriously affect someone's life. How can we prevent such situations? Testing is a good strategy, but some bugs persist. Another approach is to use more mathematical methods, also known as formal methods. Among them is denotational semantics, which maps the semantics of your favorite programs to mathematical objects on which properties can be verified; for example, under certain conditions one can determine whether a program will produce an answer. This calls for a sufficiently rich mathematical theory, and Domain Theory, the subject of this master's thesis, is a good candidate. It provides objects that formally model data and instructions using order relations. This thesis presents the fundamental concepts in a simple and pedagogical way. It also offers a solid basis for more advanced readings and contains all the material needed for its reading, notably proofs of all presented statements.
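A flavor of how order relations model computation: a standard domain-theoretic device is Kleene iteration, which reaches the least fixed point of a monotone function by iterating from the bottom element. A minimal sketch on partial functions over a finite domain (the factorial functional and all names here are my own illustration, not the thesis's material):

```python
# Partial functions {0..5} -> int, with "key absent" modelling the
# bottom (undefined) value of the corresponding function domain.
DOM = range(6)

def F(f):
    """One unfolding of fact(n) = 1 if n == 0 else n * fact(n - 1),
    applied to a partial function f represented as a dict."""
    g = {}
    for n in DOM:
        if n == 0:
            g[n] = 1
        elif n - 1 in f:
            g[n] = n * f[n - 1]
    return g

def kleene_lfp(F):
    """Iterate from the everywhere-undefined function until the
    sequence bottom, F(bottom), F(F(bottom)), ... stabilises."""
    f = {}
    while True:
        g = F(f)
        if g == f:  # fixed point reached
            return f
        f = g

fact = kleene_lfp(F)  # the least fixed point: factorial on 0..5
```

Each iteration makes the partial function defined on one more input, mirroring how a recursive definition is approximated by its finite unfoldings.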

    A study of discrepancy results in partially ordered sets

    In 2001, Fishburn, Tanenbaum, and Trenk published a pair of papers that introduced the notions of linear and weak discrepancy of a partially ordered set, or poset. The linear discrepancy of a poset is the least k such that the points can be linearly ordered so that every pair of incomparable points is at most distance k apart in the ordering. Weak discrepancy is defined analogously over weak labelings (i.e., two points may receive the same label if they are incomparable, but order is still preserved). My thesis gives a variety of results pertaining to these properties and other forms of discrepancy in posets. The first chapter partially answers a question of Fishburn, Tanenbaum, and Trenk asking for a characterization of the posets with linear discrepancy two: it gives the characterization for posets of width two and references the paper where the full characterization appears. The second chapter introduces the notion of t-discrepancy, which is like weak discrepancy except that only weak labelings with at most t copies of any label are considered. This chapter shows that determining a poset's t-discrepancy is NP-complete; it also gives the t-discrepancy of a disjoint sum of chains and provides a polynomial-time algorithm for determining the t-discrepancy of semiorders. The third chapter presents another notion of discrepancy, total discrepancy, which minimizes the average distance between incomparable elements; unlike linear discrepancy and t-discrepancy, this value can be computed in polynomial time. The final chapter answers another question of Fishburn, Tanenbaum, and Trenk, which asked for a characterization of the posets whose linear and weak discrepancies are equal. Although deciding whether the weak and linear discrepancies of a poset are equal is NP-complete, the set of minimal posets with this property is given.
    At the end of the thesis I discuss two further open problems, not mentioned in the previous chapters, that relate to linear discrepancy. The first asks whether there is a link between a poset's dimension and its linear discrepancy; the second concerns approximating linear discrepancy and possible ways to do it.
    Ph.D. Committee Chair: Trotter, William T.; Committee Member: Dieci, Luca; Committee Member: Duke, Richard; Committee Member: Randall, Dana; Committee Member: Tetali, Prasa
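Linear discrepancy as defined above can be computed for tiny posets by brute force over all linear extensions; a hedged sketch (function name and representation are my own, and this is exponential-time, not one of the thesis's algorithms):

```python
from itertools import permutations

def linear_discrepancy(points, lt):
    """Minimum over all linear extensions of the maximum index
    distance between incomparable points. lt is the strict order
    as a set of pairs (a, b) meaning a < b. Illustration only."""
    def incomparable(a, b):
        return a != b and (a, b) not in lt and (b, a) not in lt

    best = None
    for perm in permutations(points):
        pos = {p: i for i, p in enumerate(perm)}
        # keep only the orderings that extend the partial order
        if any(pos[a] > pos[b] for a, b in lt):
            continue
        worst = max((abs(pos[a] - pos[b])
                     for a in points for b in points if incomparable(a, b)),
                    default=0)
        best = worst if best is None else min(best, worst)
    return best
```

For instance, a 3-element antichain has linear discrepancy 2: in any ordering the two endpoints are incomparable and sit at distance 2.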

    Resource theories of knowledge

    How far can we take the resource-theoretic approach to explore physics? Resource theories such as LOCC, reference frames, and quantum thermodynamics have proven a powerful tool for studying how agents who are subject to certain constraints can act on physical systems. This approach has advanced our understanding of fundamental physical principles, such as the second law of thermodynamics, and provided operational measures to quantify resources such as entanglement or information content. In this work, we significantly extend the approach and range of applicability of resource theories. First, we generalize the notion of resource theories to include any description or knowledge that agents may have of a physical state, beyond the density-operator formalism. We then show how to relate theories that differ in the language used to describe resources, such as micro- and macroscopic thermodynamics. Finally, we take a top-down approach to locality, in which a subsystem structure is derived from a global theory rather than assumed. The extended framework introduced here enables us to formalize new tasks in the language of resource theories, ranging from tomography, cryptography, thermodynamics, and foundational questions, both within and beyond quantum theory.
    Comment: 28 pages featuring figures, examples, map and neatly boxed theorems, plus appendix

    Transitive closures and openings of reciprocal relations


    Constructive Fuzzy Logics

    We generalise Kripke’s semantics for Intuitionistic logic to Hajek’s BL and consider the constructive subsystems GBLewf and Intuitionistic Affine logic, or ALi. The genesis of our semantics is the Poset Product construction for GBL-algebras elucidated in a series of papers by Peter Jipsen, Simone Bova, and Franco Montagna. We present natural deduction systems for all of these systems, together with corresponding deduction theorems. We present the algebraic semantics for each of the logics under consideration and demonstrate their soundness and completeness with respect to these algebraic semantics. We also show how the classical Kripke semantics for Intuitionistic logic can be recast in terms of Poset Products. We then proceed to the main results, showing how a very natural generalisation of the Kripke semantics holds for each of GBLewf, ALi, and Hajek’s BL, based on the embedding results of Jipsen and Montagna and the decidability results of Bova and Montagna. We demonstrate soundness and completeness of the logics under our semantics in each case, with the exception of ALi, whose robust completeness with respect to the intended models (relational models with frames valued in involutive pocrims) we leave as an open problem for the ambitious reader.

    Geoinformatic methodologies and quantitative tools for detecting hotspots and for multicriteria ranking and prioritization: application on biodiversity monitoring and conservation

    Whoever has the responsibility of managing a conservation zone must not only be aware of the area's environmental problems but should also have at their disposal up-to-date data and appropriate methodological instruments to examine each individual problem carefully. Indeed, the environmental decision-maker has to arrange in advance the steps needed to withstand the foreseeable variations in the trends of human pressure on conservation zones. The main objective of this Thesis is methodological: to compare different multivariate statistical methods useful for detecting environmental hotspots and for ranking environmental units and prioritizing interventions. The general environmental goal is the conservation of the biodiversity patrimony. The identification, through multivariate statistical tools, of habitats with top ecological priority is only the first basic step toward this aim. Ecological information integrated into the human context is an essential further step for making environmental evaluations and correctly planning conservation actions. A wide range of data and information was necessary to accomplish these environmental management tasks. The ecological data are provided by the Italian Ministry of the Environment and come from the Map of Italian Nature Project database; the demographic data come from the Italian Institute of Statistics (ISTAT). The data refer to two Italian areas: the Baganza Valley (Parma) and the Oltrepò Pavese and Ligurian-Emilian Apennine. The analysis was carried out at two different spatial levels, ecological-naturalistic (the habitat) and administrative (the Commune), with the following main results:
    1. Habitat level: comparing two ranking and prioritization methods, the Ideal Vector and the Salience method, through important ecological metrics such as Ecological Value (E.V.) and Ecological Sensitivity (E.S.), gives results that are not directly comparable. Since it is not based on ranking the original values, the Ideal Vector method seems preferable in spatially very heterogeneous landscapes. Conversely, the Salience method is probably to be preferred in ecological landscapes with a low degree of heterogeneity, in the sense of not-too-large differences in habitat E.V. and E.S.
    2. Commune level: since habitats are only naturalistic partitions of a given territory, management decisions require moving to the corresponding administrative units (the Communes). From this point of view, the introduction of demography is a central element of novelty in ecological-environmental analysis. Indeed, demographic analysis makes the result of point 1 much more realistic by introducing further dimensions (current human pressure and its trends) that allow the identification of environmentally fragile areas. Furthermore, this approach clearly identifies the environmental responsibility of each administrative body for biodiversity conservation: ranking the Communes according to environmental and demographic features clarifies the management responsibilities of each of them. A concrete application of this necessary and useful integration of ecological and demographic data is discussed in designing an Ecological Network (E.N.). The novelty of the resulting Network is that it is not “static” but “dynamic”: its planning takes the trends of human pressure into account in order to identify the probable future fragility points and hence the most critical areas to manage.
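The Ideal Vector idea described at the habitat level can be sketched as ranking units by their distance to a component-wise ideal point. The data, the "higher is better" assumption, and all names below are hypothetical illustrations, not the Thesis's actual procedure or data:

```python
import math

# Hypothetical habitat scores: (Ecological Value, Ecological Sensitivity),
# both scaled to [0, 1]. Data invented for illustration.
habitats = {
    "H1": (0.90, 0.80),
    "H2": (0.40, 0.95),
    "H3": (0.60, 0.60),
}

# The "ideal vector" is the component-wise best score; here higher is
# treated as higher priority for both metrics (an assumption).
ideal = tuple(max(v[i] for v in habitats.values()) for i in range(2))

def distance_to_ideal(scores):
    # Euclidean distance in metric space; other metrics could be used.
    return math.dist(scores, ideal)

# Rank habitats: the closer to the ideal vector, the higher the priority.
ranking = sorted(habitats, key=lambda h: distance_to_ideal(habitats[h]))
```

Because it works on the raw metric values rather than their ranks, such a method is sensitive to the actual spread of E.V. and E.S., which is consistent with the observation above that it suits spatially heterogeneous landscapes.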