On the non-efficient PAC learnability of conjunctive queries
This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably-Approximately-Correct (PAC) model, paying clear attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs (and UCQs) are efficiently PAC learnable with membership queries.
Cyclic proof systems for modal fixpoint logics
This thesis is about cyclic and ill-founded proof systems for modal fixpoint logics, with and without explicit fixpoint quantifiers. Cyclic and ill-founded proof theory allows proofs with infinite branches or paths, as long as they satisfy correctness conditions ensuring the validity of the conclusion. In this dissertation we design a few cyclic and ill-founded systems: a cyclic one for the weak Grzegorczyk modal logic K4Grz, based on our explanation of the phenomenon of cyclic companionship; and ill-founded and cyclic ones for the full computation tree logic CTL* and the intuitionistic linear-time temporal logic iLTL. All systems are cut-free, and the cyclic ones for K4Grz and iLTL have fully finitary correctness conditions. Lastly, we use a cyclic system for the modal mu-calculus to obtain a proof of the uniform interpolation property for the logic which differs from the original, automata-based one.
A hybrid RBF neural network based model for day-ahead prediction of photovoltaic plant power output
Renewable energy resources like solar power contribute greatly to decreasing carbon dioxide emissions and substituting for generators fueled by fossil fuels. Due to the unpredictable and intermittent nature of solar power production, which depends on solar irradiance and other weather conditions, it is very difficult to integrate solar power into conventional power system operation economically and reliably, which underscores the demand for accurate prediction techniques. This study proposes and applies a revised radial basis function neural network (RBFNN) scheme to predict the short-term power output of a photovoltaic plant in a day-ahead manner. In the proposed method, the linear as well as non-linear variables of the RBFNN scheme are efficiently trained using the whale optimization algorithm to speed up the convergence of prediction results. A nonlinear benchmark function has also been used to validate the suggested scheme, which was then applied to predicting solar power output in a well-designed experiment. A comparative case study shows that the suggested approach provides a higher level of prediction precision than other methods in similar scenarios, which suggests the proposed method can serve as a more suitable tool for such solar energy forecasting issues.
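The core of an RBFNN is a weighted sum of radial basis functions centered at learned points. A minimal forward-pass sketch in Python (centers, widths, and weights here are hypothetical toy values; the paper's whale-optimization training step is not shown):

```python
import math

def rbf_forward(x, centers, widths, weights, bias):
    """Forward pass of a simple RBF network: a weighted sum of Gaussian
    basis functions phi_i(x) = exp(-||x - c_i||^2 / (2 * s_i^2))."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-dist2 / (2.0 * s * s))
    return out

# Toy example: two Gaussian basis functions on a 1-D input
centers = [[0.0], [1.0]]
widths = [0.5, 0.5]
weights = [1.0, -1.0]
pred = rbf_forward([0.0], centers, widths, weights, bias=0.0)
```

In the paper's scheme, both the linear output weights and the nonlinear center/width parameters would be tuned by the whale optimization algorithm rather than fixed by hand.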
Algorithms and complexity for approximately counting hypergraph colourings and related problems
The past decade has witnessed advancements in designing efficient algorithms for approximating the number of solutions to constraint satisfaction problems (CSPs), especially in the local lemma regime. However, the phase transition for computational tractability is not known. This thesis is dedicated to the prototypical problem of this kind of CSP, hypergraph colouring. Parameterised by the number of colours q, the arity of each hyperedge k, and the maximum vertex degree Δ, this problem falls into the regime of the Lovász local lemma when Δ ≲ qᵏ. Previously, however, fast approximate counting algorithms existed only when Δ ≲ q^(k/3), and there was no known inapproximability result. In pursuit of this, our contribution is two-fold, stated as follows.
• When q, k ≥ 4 are even and Δ ≥ 5·q^(k/2), approximating the number of hypergraph colourings is NP-hard.
• When the input hypergraph is linear and Δ ≲ q^(k/2), a fast approximate counting algorithm does exist.
Supporting the executability of R markdown files
R Markdown files are examples of literate programming documents that combine R code
with results and explanations. Such dynamic documents are designed to execute easily and
reproduce study results. However, little is known about the executability of R Markdown
files, which can cause frustration among users who intend to reuse the documents. This
thesis aims to understand the executability of R Markdown files and improve the current
state of supporting the executability of those files.
Towards this direction, a large-scale study has been conducted on the executability of
R Markdown files collected from GitHub repositories. Results from the study show that a
significant number of R Markdown files (64.95%) are not executable, even after our best
efforts. To better understand the challenges, the exceptions encountered while executing
the files are grouped into categories, and a classifier is developed to determine
which R Markdown files are likely to be executable. Such a classifier can be utilized by search
engines in their rankings, helping developers find literate programming documents to use as
learning resources. To support the executability of R Markdown files, a command-line tool
is developed. Such a tool can find issues in R Markdown files that prevent the executability
of those files. Using an R Markdown file as an input, the tool generates an intuitive list
of outputs that assist developers in identifying areas that require attention to ensure the
executability of the file. The tool not only utilizes static analysis of source code but also uses
a carefully crafted knowledge base of package dependencies to generate version constraints
of involved packages and a Satisfiability Modulo Theories (SMT) solver (i.e., Z3) to identify
compatible versions of those packages. Findings from this research can help developers
reuse R Markdown files easily, thus improving the productivity of developers. [...]
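The version-constraint step can be illustrated with a minimal stand-in. The thesis feeds such constraints to the Z3 SMT solver; the sketch below instead brute-forces a tiny, entirely hypothetical knowledge base of package versions to show the same satisfiability idea:

```python
from itertools import product

# Hypothetical knowledge base: available versions per package, plus
# compatibility constraints between packages (the tool encodes such
# constraints for Z3; brute force illustrates the same idea at toy scale).
versions = {
    "knitr": ["1.30", "1.40"],
    "rmarkdown": ["2.10", "2.20"],
}

# Each constraint is a predicate over a candidate assignment.
constraints = [
    # hypothetical rule: rmarkdown 2.20 is incompatible with knitr 1.30
    lambda a: not (a["rmarkdown"] == "2.20" and a["knitr"] == "1.30"),
]

def solve(versions, constraints):
    """Return the first version assignment satisfying all constraints."""
    names = list(versions)
    for combo in product(*(versions[n] for n in names)):
        assignment = dict(zip(names, combo))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

model = solve(versions, constraints)
```

A real dependency knowledge base makes the search space exponential in the number of packages, which is why an SMT solver such as Z3 is the appropriate tool rather than enumeration.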
LIPIcs, Volume 251, ITCS 2023, Complete Volume
A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks
The transformer is a deep neural network architecture that employs a self-attention mechanism
to comprehend the contextual relationships within sequential data. Unlike
conventional neural networks or updated versions of Recurrent Neural Networks
(RNNs) such as Long Short-Term Memory (LSTM), transformer models excel in
handling long dependencies between input sequence elements and enable parallel
processing. As a result, transformer-based models have attracted substantial
interest among researchers in the field of artificial intelligence. This can be
attributed to their immense potential and remarkable achievements, not only in
Natural Language Processing (NLP) tasks but also in a wide range of domains,
including computer vision, audio and speech processing, healthcare, and the
Internet of Things (IoT). Although several survey papers have been published
highlighting the transformer's contributions in specific fields, architectural
differences, or performance evaluations, there is still a significant absence
of a comprehensive survey paper encompassing its major applications across
various domains. Therefore, we undertook the task of filling this gap by
conducting an extensive survey of proposed transformer models from 2017 to
2022. Our survey encompasses the identification of the top five application
domains for transformer-based models, namely: NLP, Computer Vision,
Multi-Modality, Audio and Speech Processing, and Signal Processing. We analyze
the impact of highly influential transformer-based models in these domains and
subsequently classify them based on their respective tasks using a proposed
taxonomy. Our aim is to shed light on the existing potential and future
possibilities of transformers for enthusiastic researchers, thus contributing
to the broader understanding of this groundbreaking technology.
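The self-attention mechanism underlying all of these models is scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal pure-Python sketch with toy matrices (all values hypothetical, single head, no learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a convex
    combination of the rows of V, weighted by query-key similarity."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Two positions, dimension d = 2
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

Because every query attends to every key at once, all positions can be processed in parallel, which is the property the abstract contrasts with the sequential recurrence of RNNs and LSTMs.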
Polynomial Identity Testing and the Ideal Proof System: PIT is in NP if and only if IPS can be p-simulated by a Cook-Reckhow proof system
The Ideal Proof System (IPS) of Grochow & Pitassi (FOCS 2014, J. ACM, 2018)
is an algebraic proof system that uses algebraic circuits to refute the
solvability of unsatisfiable systems of polynomial equations. One potential
drawback of IPS is that verifying an IPS proof is only known to be doable using
Polynomial Identity Testing (PIT), which is solvable by a randomized algorithm,
but whose derandomization, even into NSUBEXP, is equivalent to strong lower
bounds. However, the circuits that are used in IPS proofs are not arbitrary,
and it is conceivable that one could get around general PIT by leveraging some
structure in these circuits. This proposal may be even more tempting when IPS
is used as a proof system for Boolean Unsatisfiability, where the equations
themselves have additional structure.
Our main result is that, on the contrary, one cannot get around PIT as above:
we show that IPS, even as a proof system for Boolean Unsatisfiability, can be
p-simulated by a deterministically verifiable (Cook-Reckhow) proof system if
and only if PIT is in NP. We use our main result to propose a potentially new
approach to derandomizing PIT into NP.
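The reason IPS verification reduces to PIT is that checking a proof amounts to testing whether an algebraic circuit computes the identically zero polynomial. The standard randomized test is the Schwartz-Zippel lemma: a nonzero polynomial of total degree d vanishes at a uniformly random point of Sⁿ with probability at most d/|S|. A sketch of that black-box test (the polynomials below are illustrative stand-ins, not IPS certificates):

```python
import random

def probably_zero(poly, nvars, field_size=2**61 - 1, trials=20):
    """Schwartz-Zippel randomized identity test: evaluate the black-box
    polynomial at random points mod a large prime; any nonzero value
    witnesses that the polynomial is not identically zero."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(nvars)]
        if poly(point) % field_size != 0:
            return False  # definite witness of nonzero-ness
    return True  # identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero
zero_poly = lambda p: (p[0] + p[1]) ** 2 - (p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2)
# x*y - 1 is not identically zero
nonzero_poly = lambda p: p[0] * p[1] - 1
```

Derandomizing this test, i.e. putting PIT in NP or better, is exactly the barrier the paper shows cannot be sidestepped by exploiting the structure of IPS circuits.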
Tackling Universal Properties of Minimal Trap Spaces of Boolean Networks
Minimal trap spaces (MTSs) capture subspaces in which the Boolean dynamics is
trapped, whatever the update mode. They correspond to the attractors of the
most permissive mode. Due to their versatility, the computation of MTSs has
recently gained traction, essentially by focusing on their enumeration. In this
paper, we address the logical reasoning on universal properties of MTSs in the
scope of two problems: the reprogramming of Boolean networks for identifying
the permanent freeze of Boolean variables that enforce a given property on all
the MTSs, and the synthesis of Boolean networks from universal properties on
their MTSs. Both problems reduce to solving the satisfiability of quantified
propositional logic formulas with three levels of quantifiers. We introduce a
Counter-Example Guided Abstraction Refinement (CEGAR) procedure to efficiently
solve these problems by coupling
the resolution of two simpler formulas. We provide a prototype relying on
Answer-Set Programming for each formula and show its tractability on a wide
range of Boolean models of biological networks. (Accepted at the 21st International Conference on Computational Methods in Systems Biology, CMSB 2023.)
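A trap space is a subspace (a partial assignment of the variables) that the dynamics cannot leave, whichever variable is updated. For intuition, a brute-force check on a hypothetical two-variable Boolean network (real tools enumerate minimal trap spaces symbolically, e.g. via Answer-Set Programming as in the paper; this sketch only illustrates the definition):

```python
from itertools import product

# Hypothetical Boolean network: x1' = x1 or x2,  x2' = x1 and x2
fs = [lambda s: s[0] or s[1], lambda s: s[0] and s[1]]

def states(space):
    """Enumerate all full states of a subspace, given as a tuple with
    0/1 for fixed variables and None for free variables."""
    free = [i for i, v in enumerate(space) if v is None]
    for bits in product([0, 1], repeat=len(free)):
        s = list(space)
        for i, b in zip(free, bits):
            s[i] = b
        yield tuple(s)

def is_trap_space(space):
    """A subspace is a trap space if updating any variable of any of its
    states never moves a fixed variable away from its fixed value."""
    for s in states(space):
        for i, f in enumerate(fs):
            if space[i] is not None and int(f(s)) != space[i]:
                return False
    return True

traps = [sp for sp in product([0, 1, None], repeat=2) if is_trap_space(sp)]
```

The minimal trap spaces are the inclusion-minimal members of this collection; they over-approximate the attractors under every update mode, which is the versatility the abstract refers to.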
When Deep Learning Meets Polyhedral Theory: A Survey
In the past decade, deep learning became the prevalent methodology for
predictive modeling thanks to the remarkable accuracy of deep neural networks
in tasks such as computer vision and natural language processing. Meanwhile,
the structure of neural networks converged back to simpler representations
based on piecewise constant and piecewise linear functions such as the
Rectified Linear Unit (ReLU), which became the most commonly used type of
activation function in neural networks. That made certain types of network
structure, such as the typical fully-connected feedforward neural network,
amenable to analysis through polyhedral theory
and to the application of methodologies such as Linear Programming (LP) and
Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this
paper, we survey the main topics emerging from this fast-paced area of work,
which bring a fresh perspective to understanding neural networks in more detail
as well as to applying linear optimization techniques to train, verify, and
reduce the size of such networks.
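The piecewise-linear structure that enables this polyhedral view is easy to see on a scalar input: each hidden ReLU unit contributes a hinge at the point where its pre-activation crosses zero, and between consecutive hinges the network is exactly affine. A minimal sketch with hypothetical weights:

```python
# One-hidden-layer ReLU network on a scalar input:
# f(x) = b2 + sum_j w2_j * max(0, w1_j * x + b1_j)
relu = lambda z: max(0.0, z)

w1, b1 = [1.0, -1.0], [0.0, 1.0]  # hypothetical hidden-layer parameters
w2, b2 = [1.0, 2.0], 0.5          # hypothetical output-layer parameters

def net(x):
    return b2 + sum(w * relu(a * x + b) for w, a, b in zip(w2, w1, b1))

# Each unit's hinge sits where its pre-activation is zero: x = -b/a.
breakpoints = sorted(-b / a for a, b in zip(w1, b1))  # here: 0.0 and 1.0

def slope(x, eps=1e-6):
    """Finite-difference slope; constant between consecutive breakpoints."""
    return (net(x + eps) - net(x)) / eps
```

MILP-based verification methods exploit exactly this structure: each ReLU's active/inactive state becomes a binary variable, and within a fixed activation pattern the network reduces to a linear program over one polyhedral region.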