Testing read-once formula satisfaction
We study the query complexity of testing for properties defined by read-once formulas, as instances of massively parametrized properties, and prove several testability and non-testability results. First we prove the testability of any property accepted by a Boolean read-once formula involving any bounded-arity gates, with a number of queries exponential in \epsilon, doubly exponential in the arity, and independent of all other parameters. When the gates are limited to being monotone, we prove that there is an estimation algorithm that outputs an approximation of the distance of the input from satisfying the property. For formulas involving only And/Or gates, we provide a more efficient test whose query complexity is only quasi-polynomial in \epsilon. On the other hand, we show that such testability results do not hold in general for formulas over non-Boolean alphabets; specifically, we construct a property defined by a read-once arity-2 (non-Boolean) formula over an alphabet of size 4, such that any 1/4-test for it requires a number of queries depending on the formula size. We also present such a formula over a constant-size alphabet that additionally satisfies a strong monotonicity condition.
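To make the "distance from satisfying the property" notion concrete, here is a minimal sketch (not the paper's sublinear-query test, which only samples the input). For a read-once And/Or formula over Boolean variables, each variable appears in exactly one subtree, so subtree distances combine independently: an And node needs every child satisfied (distances add), while an Or node needs only one (take the minimum). The formula encoding below is illustrative, not from the paper.

```python
# Hypothetical sketch: exact distance of an assignment from satisfying a
# read-once And/Or formula. Nodes are ("var", index), ("and", children),
# or ("or", children). Read-once means subtrees use disjoint variables,
# so their distances combine independently:
#   dist(AND) = sum of children's distances (all must be satisfied)
#   dist(OR)  = min of children's distances (one suffices)
def dist_sat(node, assignment):
    kind = node[0]
    if kind == "var":
        _, idx = node
        return 0 if assignment[idx] else 1  # one flip if the bit is 0
    _, children = node
    child_dists = [dist_sat(c, assignment) for c in children]
    return sum(child_dists) if kind == "and" else min(child_dists)

# (x0 or x1) and (x2 or x3)
f = ("and", [("or", [("var", 0), ("var", 1)]),
             ("or", [("var", 2), ("var", 3)])])
print(dist_sat(f, [0, 0, 1, 0]))  # 1: flipping x0 (or x1) satisfies f
```

An estimation algorithm in the abstract's sense approximates this quantity while querying only a small fraction of the assignment.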
GPT-PINN: Generative Pre-Trained Physics-Informed Neural Networks toward non-intrusive Meta-learning of parametric PDEs
Physics-Informed Neural Network (PINN) has proven itself a powerful tool to
obtain the numerical solutions of nonlinear partial differential equations
(PDEs) leveraging the expressivity of deep neural networks and the computing
power of modern heterogeneous hardware. However, its training is still
time-consuming, especially in multi-query and real-time simulation
settings, and its parameterization is often excessive. In this paper, we
propose the Generative Pre-Trained PINN (GPT-PINN) to mitigate both challenges
in the setting of parametric PDEs. GPT-PINN represents a brand-new
meta-learning paradigm for parametric systems. As a network of networks, its
outer-/meta-network is hyper-reduced, with only one hidden layer containing
a significantly reduced number of neurons. Moreover, its activation function at
each hidden neuron is a (full) PINN pre-trained at a judiciously selected
system configuration. The meta-network adaptively "learns" the parametric
dependence of the system and "grows" this hidden layer one neuron at a time.
In the end, by encompassing a very small number of networks trained at this set
of adaptively-selected parameter values, the meta-network is capable of
generating surrogate solutions for the parametric system across the entire
parameter domain accurately and efficiently.
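The "network of networks" idea in this abstract can be sketched as follows, under stated assumptions: the meta-network's single hidden layer uses full pre-trained PINNs as its activation functions, and the surrogate at a new parameter value is a learned combination of their outputs. The names (`gpt_pinn_surrogate`, `snapshots`) are illustrative, not the authors' API, and the greedy neuron-by-neuron growth and residual-based training of the weights are omitted.

```python
# Hypothetical sketch of the GPT-PINN meta-network: one hidden layer whose
# "neurons" are PINNs pre-trained at adaptively selected parameter values,
# followed by a linear output layer.
import numpy as np

def gpt_pinn_surrogate(x, weights, snapshots):
    """Evaluate the meta-network surrogate at points x.

    snapshots: list of callables, each a PINN pre-trained at one
               selected system configuration (a "hidden neuron").
    weights:   one output weight per snapshot; in the paper these are
               trained against the PDE residual at the new parameter.
    """
    hidden = np.stack([pinn(x) for pinn in snapshots])  # hidden-layer outputs
    return weights @ hidden                             # linear output layer

# Toy usage with stand-in "pre-trained PINNs":
snapshots = [np.sin, np.cos]  # placeholders for actual trained networks
x = np.linspace(0.0, 1.0, 5)
u = gpt_pinn_surrogate(x, np.array([0.3, 0.7]), snapshots)
print(u.shape)  # (5,)
```

Because only the handful of output weights are trained per new parameter value, evaluating the surrogate across the parameter domain is far cheaper than retraining a full PINN each time.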
Ten years after ImageNet: a 360° perspective on artificial intelligence
It is 10 years since neural networks made their spectacular
comeback. Prompted by this anniversary, we take a holistic
perspective on artificial intelligence (AI). Supervised learning for
cognitive tasks is effectively solved, provided we have enough
high-quality labelled data. However, deep neural network
models are not easily interpretable, and thus the debate between
black-box and white-box modelling has come to the fore. The rise
of attention networks, self-supervised learning, generative
modelling and graph neural networks has widened the
application space of AI. Deep learning has also propelled the
return of reinforcement learning as a core building block of
autonomous decision-making systems. The harms made possible
by new AI technologies have raised socio-technical
issues such as transparency, fairness and accountability. The
dominance of AI by Big Tech, which controls talent, computing
resources and, most importantly, data, may lead to an extreme
AI divide. Despite the recent dramatic and unexpected success
in AI-driven conversational agents, progress in much-heralded
flagship projects like self-driving vehicles remains elusive. Care
must be taken to moderate the rhetoric surrounding the field
and align engineering progress with scientific principles.