59 research outputs found

    Fast Monotone Summation over Disjoint Sets

    We study the problem of computing an ensemble of multiple sums where the summands in each sum are indexed by subsets of size p of an n-element ground set. More precisely, the task is to compute, for each subset of size q of the ground set, the sum over the values of all subsets of size p that are disjoint from it. We present an arithmetic circuit that, without subtraction, solves the problem using O((n^p + n^q) log n) arithmetic gates, all monotone; for constant p and q this is within a factor of log n of optimal. The circuit design is based on viewing the summation as a "set nucleation" task and using a tree-projection approach to implement the nucleation. Applications include improved algorithms for counting heaviest k-paths in a weighted graph, computing permanents of rectangular matrices, and dynamic feature selection in machine learning.
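As a reference point, the ensemble of sums described above can be computed by brute force in roughly O(n^{p+q}) operations; the sketch below (illustrative code, not the paper's circuit construction) spells out the task being solved:

```python
from itertools import combinations

def disjoint_sums(values, n, p, q):
    """For each q-subset Q of {0..n-1}, sum values[P] over all p-subsets P
    disjoint from Q.  Brute force; the paper's monotone circuit achieves
    this with only O((n^p + n^q) log n) gates."""
    out = {}
    for Q in combinations(range(n), q):
        qset = set(Q)
        out[Q] = sum(values[P] for P in combinations(range(n), p)
                     if qset.isdisjoint(P))
    return out

# Toy instance: n = 4, p = 2, q = 1; value of a pair P is the sum of its elements.
vals = {P: sum(P) for P in combinations(range(4), 2)}
res = disjoint_sums(vals, 4, 2, 1)
# For Q = (0,): pairs disjoint from {0} are (1,2), (1,3), (2,3) -> 3 + 4 + 5 = 12
```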

    Graph Complexity and Slice Functions

    Abstract. A graph-theoretic approach to studying the complexity of Boolean functions was initiated by Pudlák, Rödl, and Savický [PRS] by defining models of computation on graphs. These models generalize well-known models of Boolean complexity such as circuits, branching programs, and two-party communication complexity.

    Robustly Separating the Arithmetic Monotone Hierarchy via Graph Inner-Product


    Graph and Hypergraph Decompositions for Exact Algorithms

    This thesis studies exact exponential and fixed-parameter algorithms for hard graph and hypergraph problems. Specifically, we study two techniques that can be used in the development of such algorithms: (i) combinatorial decompositions of both the input instance and the solution, and (ii) evaluation of multilinear forms over semirings. In the first part of the thesis we develop new algorithms for graph and hypergraph problems based on techniques (i) and (ii). While both techniques are useful independently, the work presented in this part is largely characterised by their joint application. That is, combining results from different pieces of the decompositions often takes the form of a multilinear-form evaluation task, and on the other hand, decompositions offer the basic structure for dynamic-programming-style algorithms for the evaluation of multilinear forms. As the main positive results of the first part, we give algorithms for three different problem families. First, we give a fast evaluation algorithm for linear forms defined by a disjointness matrix of small sets. This can be applied to obtain faster algorithms for counting maximum-weight objects of small size, such as k-paths in graphs. Second, we give a general framework for exponential-time algorithms for finding maximum-weight subgraphs of bounded tree-width, based on the theory of tree decompositions. Besides basic combinatorial problems, this framework has applications in learning Bayesian network structures. Third, we give a fixed-parameter algorithm for finding unbalanced vertex cuts, that is, vertex cuts that separate a small number of vertices from the rest of the graph. In the second part of the thesis we consider aspects of the complexity theory of linear forms over semirings, in order to better understand technique (ii). Specifically, we study how the presence of different algebraic catalysts in the ground semiring affects the complexity.
As the main result, we show that there are linear forms that are easy to compute over semirings with idempotent addition but difficult to compute over rings, unless the strong exponential time hypothesis fails.

One of the fundamental goals of computer science is the development of efficient algorithms. From a theoretical perspective, an algorithm is usually considered efficient if its running time depends polynomially on the size of the input. There are, however, computational problems for which no polynomial-time algorithms exist. For example, NP-hard problems cannot be solved in polynomial time if the general complexity assumption P ≠ NP holds. Nevertheless, we often still want to solve such hard problems. Two common approaches to solving hard problems exactly, when polynomial-time algorithms are unavailable, are (i) exponential-time algorithmics and (ii) parameterized algorithmics. Exponential-time algorithmics develops algorithms whose running time is still exponential in the input size but which avoid traversing the entire solution space; in other words, the goal is less exponential algorithms. Parameterized algorithmics, in turn, aims to isolate the exponential dependence of the running time into a parameter independent of the input size. This dissertation presents exponential-time and parameterized algorithms for the exact solution of various hard graph and hypergraph problems. The presented algorithms are based on two algorithmic techniques: (i) evaluation of multilinear forms over various semirings and (ii) the use of combinatorial decompositions. In addition to algorithms, the thesis examines complexity-theoretic questions related to these techniques, which helps in understanding their limitations and their as-yet unexploited possibilities.
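The semiring-evaluation technique (ii) can be made concrete with a tiny generic evaluator. The sketch below (illustrative, not from the thesis) evaluates the same linear form A·x over the ordinary ring (+, ×) and over the tropical (min, +) semiring, whose addition is idempotent since min(a, a) = a:

```python
def mat_vec(A, x, add, mul, zero):
    """Evaluate the linear form A.x over an arbitrary semiring,
    given its addition, multiplication, and additive identity."""
    out = []
    for row in A:
        acc = zero
        for a, v in zip(row, x):
            acc = add(acc, mul(a, v))
        out.append(acc)
    return out

A = [[1, 3], [2, 0]]
x = [5, 1]
# Ordinary ring (+, *): standard matrix-vector product.
ring = mat_vec(A, x, lambda a, b: a + b, lambda a, b: a * b, 0)
# Tropical semiring (min, +): idempotent addition, used e.g. for shortest paths.
trop = mat_vec(A, x, min, lambda a, b: a + b, float("inf"))
```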

    Fast Möbius and Zeta Transforms

    Möbius inversion of functions on partially ordered sets (posets) P is a classical tool in combinatorics. For finite posets it consists of two mutually inverse linear transformations, called the zeta and Möbius transform, respectively. In this paper we provide novel fast algorithms for both that require O(nk) time and space, where n = |P| and k is the width (length of the longest antichain) of P, compared to O(n^2) for a direct computation. Our approach assumes that P is given as a directed acyclic graph (DAG) (E, P). The algorithms are then constructed using a chain decomposition for a one-time cost of O(|E| + |E_red|·k), where |E_red| is the number of edges in the DAG's transitive reduction. We show benchmarks with implementations of all algorithms, including parallelized versions. The results show that our algorithms enable Möbius inversion on posets with millions of nodes in seconds if the defining DAGs are sufficiently sparse.
    Comment: 16 pages, 7 figures, submitted for review
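For orientation, the two transforms are easy to state directly; the sketch below (illustrative code, showing the O(n^2) baseline the paper improves on, not the paper's chain-decomposition algorithm) computes the zeta transform and inverts it over a small poset given by its order relation:

```python
def zeta_transform(f, leq):
    """g(x) = sum of f(y) over all y <= x; direct O(n^2) computation.
    leq[y][x] is True iff y <= x in the poset (reflexive)."""
    n = len(f)
    return [sum(f[y] for y in range(n) if leq[y][x]) for x in range(n)]

def mobius_transform(g, leq):
    """Inverse of the zeta transform; indices are assumed to be listed
    in a linear extension of the poset (topological order)."""
    f = list(g)
    for x in range(len(g)):
        for y in range(x):
            if leq[y][x]:
                f[x] -= f[y]
    return f

# Example poset: the subset lattice of {0,1}, elements indexed by bitmask,
# with y <= x iff y is a submask of x.
leq = [[(y & x) == y for x in range(4)] for y in range(4)]
f = [1, 2, 3, 4]
g = zeta_transform(f, leq)        # summed-over-downsets version of f
back = mobius_transform(g, leq)   # recovers f exactly
```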

    Pessimistic Bayesianism for conservative optimization and imitation

    Subject to several assumptions, sufficiently advanced reinforcement learners would likely face an incentive and likely have an ability to intervene in the provision of their reward, with catastrophic consequences. In this thesis, I develop a theory of pessimism and show how it can produce safe advanced artificial agents. Not only do I demonstrate that the assumptions mentioned above can be avoided; I prove theorems which demonstrate safety. First, I develop an idealized pessimistic reinforcement learner. For any given novel event that a mentor would never cause, a sufficiently pessimistic reinforcement learner trained with the help of that mentor would probably avoid causing it. This result is without precedent in the literature. Next, on similar principles, I develop an idealized pessimistic imitation learner. If the probability of an event when the demonstrator acts can be bounded above, then the probability can be bounded above when the imitator acts instead; this kind of result is unprecedented when the imitator learns online and the environment never resets. In an environment that never resets, no one has previously demonstrated, to my knowledge, that an imitation learner even exists. Finally, both of the agents above demand more efficient algorithms for high-quality uncertainty quantification, so I have developed a new kernel for Gaussian process modelling that allows for log-linear time complexity and linear space complexity, instead of a naïve cubic time complexity and quadratic space complexity. This is not the first Gaussian process with this time complexity—inducing points methods have linear complexity—but we do outperform such methods significantly on regression benchmarks, as one might expect given the much higher dimensionality of our kernel. This thesis shows the viability of pessimism with respect to well-quantified epistemic uncertainty as a path to safe artificial agency.

    Deception detection in dialogues

    In the social media era, it is commonplace to engage in written conversations. People sometimes even form connections across large distances, in writing. However, human communication is in large part non-verbal. This means it is now easier for people to hide their harmful intentions. At the same time, people can now get in touch with more people than ever before. This puts vulnerable groups at higher risk for malevolent interactions, such as bullying, trolling, or predatory behavior. Furthermore, such behaviors have recently led to waves of fake news and a growing industry of deceit creators and deceit detectors. There is now an urgent need for both theory that explains deception and applications that automatically detect deception. In this thesis I address this need with a novel application that learns from examples and detects deception reliably in natural-language dialogues. I formally define the problem of deception detection and identify several domains where it is useful. I introduce and evaluate new psycholinguistic features of deception in written dialogues for two datasets. My results shed light on the connection between language, deception, and perception. They also underline the challenges and difficulty of assessing perceptions from written text. To automatically learn to detect deception, I first introduce an expressive logical model and then present a probabilistic model that simplifies the first and is learnable from labeled examples. I introduce a belief-over-belief formalization, based on Kripke semantics and situation calculus. I use an observation model to describe how utterances are produced from the nested beliefs and intentions. This allows me to easily make inferences about these beliefs and intentions given utterances, without needing to explicitly represent perlocutions. The agents' belief states are filtered with the observed utterances, resulting in an updated Kripke structure. I then translate my formalization to a practical system that can learn from a small dataset and performs well using very little structural background knowledge, in the form of a relational dynamic Bayesian network structure.
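The core filtering idea, a belief state updated by each observed utterance, can be illustrated with a minimal Bayes filter over a binary hidden state. This is a stand-in sketch, not the thesis's belief-over-belief model; the state names, transition matrix, and observation likelihoods are all invented for illustration:

```python
def filter_belief(prior, transition, likelihood, observations):
    """Bayes filter over a binary hidden state (0 = truthful, 1 = deceptive).
    transition[s][t] = P(next = t | current = s);
    likelihood[t][o]  = P(observation = o | state = t)."""
    belief = list(prior)
    for obs in observations:
        # Predict: push the belief through the transition model.
        pred = [sum(transition[s][t] * belief[s] for s in (0, 1)) for t in (0, 1)]
        # Update: weight by the observation likelihood and renormalize.
        post = [likelihood[t][obs] * pred[t] for t in (0, 1)]
        z = sum(post)
        belief = [p / z for p in post]
    return belief

prior = [0.9, 0.1]                       # mostly truthful a priori (illustrative)
transition = [[0.95, 0.05], [0.1, 0.9]]  # deceptive intent is sticky
likelihood = [[0.7, 0.3], [0.2, 0.8]]    # observation 1 = a hedging/deception cue
belief = filter_belief(prior, transition, likelihood, [1, 1, 1])
# Repeated deception cues shift probability mass toward the deceptive state.
```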

    Linguistic-based Patterns for Figurative Language Processing: The Case of Humor Recognition and Irony Detection

    Figurative language represents one of the most difficult tasks in natural language processing. Unlike literal language, figurative language makes use of linguistic devices such as irony, humor, sarcasm, metaphor, and analogy, among others, to communicate indirect meanings that most of the time cannot be interpreted in terms of syntactic or semantic information alone. Rather, figurative language reflects patterns of thought that acquire their full meaning in communicative and social contexts, which makes both its linguistic representation and its computational processing highly complex tasks. In this context, this doctoral thesis addresses a problem in figurative language processing based on linguistic patterns. In particular, our efforts focus on building a system capable of automatically detecting instances of humor and irony in texts extracted from social media. Our main hypothesis rests on the premise that language reflects patterns of conceptualization; that is, by studying language, we study those patterns. Therefore, by analyzing these two domains of figurative language, we aim to provide arguments about how people conceive them and, above all, about how that conception causes both humor and irony to be verbalized in a particular way across various social media. In this context, one of our main interests is to show how knowledge drawn from the analysis of different levels of linguistic study can constitute a set of patterns relevant for automatically identifying figurative uses of language.
    It is worth noting that, unlike most approaches that have focused on the study of figurative language, our research does not seek to provide arguments based solely on prototypical examples, but on texts whose characteristics
    Reyes Pérez, A. (2012). Linguistic-based Patterns for Figurative Language Processing: The Case of Humor Recognition and Irony Detection [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16692
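Pattern-based detection of this kind typically starts from surface-level stylistic cues. The sketch below is a toy feature extractor of that general flavor; the feature names are invented for illustration, and the thesis's actual feature set spans several linguistic levels not reproduced here:

```python
import re

def stylistic_features(text):
    """Extract simple surface cues often associated with humor/irony,
    e.g. emphatic punctuation, ellipses, shouting, and emoticons."""
    tokens = text.split()
    return {
        "n_tokens": len(tokens),
        "exclamations": text.count("!"),
        "question_marks": text.count("?"),
        "ellipsis": len(re.findall(r"\.\.\.", text)),
        "uppercase_ratio": sum(w.isupper() for w in tokens) / max(len(tokens), 1),
        "emoticons": len(re.findall(r"[:;]-?[)(DP]", text)),
    }

feats = stylistic_features("Oh GREAT, another Monday... just what I needed ;)")
```

Such feature vectors would then feed a standard supervised classifier trained on labeled humorous/ironic examples.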