Towards Optimal Free Trade Agreement Utilization through Deep Learning Techniques
In recent years, deep learning based methods have achieved new state-of-the-art results in domains such as image recognition, speech recognition, and natural language processing. In the context of tax and customs, however, existing applications of artificial intelligence, and of deep learning in particular, are limited. In this paper, we investigate the potential of deep learning techniques to improve the Free Trade Agreement (FTA) utilization of trade transactions. We show that supervised learning models can be trained to decide, on the basis of transaction characteristics such as import country, export country, and product type, whether an FTA can be utilized. We apply an architecture with multiple embeddings to efficiently capture the structure of tabular data. The experiments were evaluated on real-world data generated by the Enterprise Resource Planning (ERP) systems of an international chemical and consumer goods company.
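The multi-embedding architecture for tabular transactions can be sketched as follows. The feature names, cardinalities, and embedding width are illustrative assumptions, and the random weights are stand-ins for learned parameters; this is a minimal numpy sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cardinalities for categorical transaction features
cardinalities = {"import_country": 50, "export_country": 50, "product_type": 200}
emb_dim = 8  # illustrative embedding width

# One embedding table per categorical feature; random stand-ins for learned weights
tables = {k: rng.normal(0.0, 0.1, size=(n, emb_dim)) for k, n in cardinalities.items()}

# Linear output layer mapping the concatenated embeddings to a single logit
W = rng.normal(0.0, 0.1, size=emb_dim * len(tables))
b = 0.0

def predict_fta_utilizable(transaction):
    """Return the model's probability that an FTA can be utilized."""
    vec = np.concatenate([tables[k][transaction[k]] for k in cardinalities])
    logit = float(vec @ W + b)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

prob = predict_fta_utilizable(
    {"import_country": 3, "export_country": 7, "product_type": 42}
)
```

In a full model the embedding tables and output layer would be trained jointly on labeled transactions; looking up a learned vector per category is what lets the network share statistical strength across transactions with the same country or product.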
Complementary First and Second Derivative Methods for Ansatz Optimization in Variational Monte Carlo
We present a comparison between a number of recently introduced low-memory
wave function optimization methods for variational Monte Carlo in which we find
that first and second derivative methods possess strongly complementary
relative advantages. While we find that low-memory variants of the linear
method are vastly more efficient at bringing wave functions with disparate
types of nonlinear parameters to the vicinity of the energy minimum,
accelerated descent approaches are then able to locate the precise minimum with
less bias and lower statistical uncertainty. By constructing a simple hybrid
approach that combines these methodologies, we show that all of these
advantages can be had at once when simultaneously optimizing large determinant
expansions, molecular orbital shapes, traditional Jastrow correlation factors,
and more nonlinear many-electron Jastrow factors.
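The hybrid strategy can be illustrated on a toy objective: a few second-derivative (Newton-like) steps stand in for the linear method's rapid approach to the minimum's vicinity, followed by accelerated (momentum) descent for the final refinement. The quadratic surrogate energy and all step sizes are illustrative assumptions, not a VMC implementation:

```python
import numpy as np

def energy(p):
    # Toy quadratic surrogate for a VMC energy surface over two parameters
    return (p[0] - 1.0) ** 2 + 5.0 * (p[1] + 0.5) ** 2 + 0.1 * p[0] * p[1]

def grad(p, h=1e-5):
    # Central-difference gradient
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (energy(p + e) - energy(p - e)) / (2 * h)
    return g

def hessian(p, h=1e-4):
    # Central-difference Hessian built from the gradient
    d = len(p)
    H = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2 * h)
    return H

p = np.array([4.0, 3.0])  # start far from the minimum

# Phase 1: second-derivative (Newton-like) steps rapidly reach the minimum's vicinity
for _ in range(3):
    p = p - np.linalg.solve(hessian(p), grad(p))

# Phase 2: accelerated (momentum) descent refines the final location
v = np.zeros_like(p)
for _ in range(200):
    v = 0.9 * v - 0.05 * grad(p)
    p = p + v
```

In actual VMC both phases would work with stochastic estimates of the energy and its derivatives, which is where the descent phase's lower bias and statistical uncertainty matter.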
Optimizing Counterdiabaticity by Variational Quantum Circuits
Utilizing counterdiabatic (CD) driving, which aims at suppressing diabatic
transitions, in digitized adiabatic evolution has garnered immense interest in
quantum protocols and algorithms. However, improving the approximate CD terms
with a nested commutator ansatz is a challenging task. In this work, we propose
a technique of finding optimal coefficients of the CD terms using a variational
quantum circuit. Using classical optimization routines, the parameters of this
circuit are optimized to provide the coefficients corresponding to the CD
terms. The improved performance of these terms is then exemplified in
Greenberger-Horne-Zeilinger state preparation on a nearest-neighbor Ising model.
Finally, we also show the advantage over the usual quantum approximate
optimization algorithm in terms of fidelity within bounded time.
Comment: 7 pages, 5 figures; accepted for publication in the upcoming theme
issue of Philosophical Transactions
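The idea of tuning CD coefficients by classical optimization can be illustrated on a single-qubit Landau-Zener sweep, with a coarse scan standing in for the variational quantum circuit optimizer. The schedule, gap, and fixed sigma_y profile are illustrative assumptions:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

delta, T, steps = 0.2, 0.5, 200  # small gap and a fast, diabatic sweep
dt = T / steps

def H(s):
    # Landau-Zener-type schedule: bias swept through the avoided crossing
    return (2 * s - 1) * sz + delta * sx

def ground(Hm):
    w, V = np.linalg.eigh(Hm)
    return V[:, 0]  # eigenvector of the lowest eigenvalue

def evolve_step(Hm, psi):
    # Exact short-time propagator exp(-i Hm dt) via eigendecomposition
    w, V = np.linalg.eigh(Hm)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

def fidelity(alpha):
    # alpha scales a sigma_y CD term with a fixed spatial profile; the
    # variational task is to find the alpha that best suppresses transitions
    psi = ground(H(0.0))
    for k in range(steps):
        s = (k + 0.5) / steps
        profile = delta / (delta ** 2 + (2 * s - 1) ** 2)
        psi = evolve_step(H(s) + alpha * profile * sy, psi)
    return abs(np.vdot(ground(H(1.0)), psi)) ** 2

# Coarse classical scan standing in for a variational optimizer
alphas = np.linspace(-4.0, 4.0, 81)
best_alpha = max(alphas, key=fidelity)
```

For this schedule the exact transitionless coefficient is within the scanned range, so the optimized sweep reaches near-unit fidelity where the bare diabatic sweep fails; the paper's approach plays the same game with parameterized circuits on many-qubit Ising Hamiltonians.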
MetaSymNet: A Dynamic Symbolic Regression Network Capable of Evolving into Arbitrary Formulations
Mathematical formulas serve as the means of communication between humans and
nature, encapsulating the operational laws governing natural phenomena. The
concise formulation of these laws is a crucial objective in scientific research
and an important challenge for artificial intelligence (AI). While traditional
artificial neural networks (MLPs) excel at data fitting, they often yield
uninterpretable black-box results that hinder our understanding of the
relationship between the variables x and the predicted values y. Moreover, the
fixed network architecture of an MLP often gives rise to redundancy in both network
structure and parameters. To address these issues, we propose MetaSymNet, a
novel neural network that dynamically adjusts its structure in real-time,
allowing for both expansion and contraction. This adaptive network employs the
PANGU meta function as its activation function, which is a unique type capable
of evolving into various basic functions during training to compose
mathematical formulas tailored to specific needs. We then evolve the neural
network into a concise, interpretable mathematical expression. To evaluate
MetaSymNet's performance, we compare it with four state-of-the-art symbolic
regression algorithms across more than 10 public datasets comprising 222
formulas. Our experimental results demonstrate that our algorithm outperforms
others consistently regardless of noise presence or absence. Furthermore, we
assess MetaSymNet against MLP and SVM regarding their fitting ability and
extrapolation capability, two essential aspects of machine learning
algorithms. The findings reveal that our algorithm excels in both areas.
Finally, we compared MetaSymNet with an MLP subjected to iterative pruning in
terms of network structure complexity. The results show that MetaSymNet's
network structure complexity is markedly lower than that of the MLP at the same
goodness of fit.
Comment: 16 pages
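The notion of an activation that "evolves" into a basic function can be sketched as a weighted mixture of primitive functions whose dominant weight is extracted as the symbolic form. The primitive pool, the least-squares fit standing in for training, and the target law are illustrative assumptions, not the PANGU meta function itself:

```python
import numpy as np

# A pool of primitive functions the meta-activation can "evolve" into
primitives = {
    "identity": lambda x: x,
    "square": lambda x: x ** 2,
    "sin": np.sin,
    "cos": np.cos,
}

x = np.linspace(-3.0, 3.0, 100)
y = np.sin(x)  # hidden law the network should recover

# Meta-activation as a weighted mixture of primitives; fitting the mixture
# weights is a stand-in for training (here a linear least-squares problem)
Phi = np.stack([f(x) for f in primitives.values()], axis=1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# "Evolve" the node into its dominant primitive -> an interpretable formula
chosen = list(primitives)[int(np.argmax(np.abs(w)))]
```

Because the mixture collapses onto one primitive per node, reading off the surviving primitives yields a closed-form expression rather than a black-box fit, which is the interpretability gain the abstract describes.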
Meta-Learning for Symbolic Hyperparameter Defaults
Hyperparameter optimization in machine learning (ML) deals with the problem
of empirically learning an optimal algorithm configuration from data, usually
formulated as a black-box optimization problem. In this work, we propose a
zero-shot method to meta-learn symbolic default hyperparameter configurations
that are expressed in terms of the properties of the dataset. This enables a
much faster, but still data-dependent, configuration of the ML algorithm,
compared to standard hyperparameter optimization approaches. In the past,
symbolic and static default values have usually been obtained as hand-crafted
heuristics. We propose an approach of learning such symbolic configurations as
formulas of dataset properties from a large set of prior evaluations on
multiple datasets by optimizing over a grammar of expressions using an
evolutionary algorithm. We evaluate our method on surrogate empirical
performance models as well as on real data across 6 ML algorithms on more than
100 datasets and demonstrate that our method indeed finds viable symbolic
defaults.
Comment: Pieter Gijsbers and Florian Pfisterer contributed equally to the
paper. V1: Two-page GECCO poster paper accepted at GECCO 2021. V2: The
original full-length paper (8 pages) with appendix
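The setup can be sketched as follows: candidate symbolic formulas over dataset meta-features are scored on a surrogate, and the best one becomes the default. The meta-features, the surrogate loss (distance to an assumed optimum sqrt(p)), and the candidate pool are illustrative assumptions; a real run would evolve expression trees from a grammar rather than select from a fixed pool:

```python
import math

# Meta-features (n_samples, n_features) of several hypothetical datasets
datasets = [(100, 16), (1000, 64), (500, 25), (2000, 100)]

def surrogate_loss(formula):
    # Stand-in for a surrogate performance model: assume the truly optimal
    # hyperparameter value on each dataset is sqrt(p)
    return sum(abs(formula(n, p) - math.sqrt(p)) for n, p in datasets) / len(datasets)

# A tiny pool of symbolic candidate defaults expressed over the meta-features
candidates = {
    "p / 2": lambda n, p: p / 2,
    "log2(n)": lambda n, p: math.log2(n),
    "sqrt(p)": lambda n, p: math.sqrt(p),
    "p ** 0.75": lambda n, p: p ** 0.75,
}

# Selection step; an evolutionary algorithm would also mutate and recombine
best_name = min(candidates, key=lambda name: surrogate_loss(candidates[name]))
```

The payoff of a symbolic default is visible here: once "sqrt(p)" wins, it transfers to any new dataset by plugging in that dataset's meta-features, with no per-dataset tuning run.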
Automated Refactoring of Nested-IF Formulae in Spreadsheets
Spreadsheets are the most popular end-user programming software, where
formulae act like programs and can also exhibit smells. One well-recognized
smell of spreadsheet formulae is the nested-IF expression, which has low
readability and high cognitive cost for users, and is error-prone during reuse
or maintenance. However, end users usually lack the programming language
knowledge and skills to tackle, or even recognize, the problem. Previous
research has made only initial attempts in this direction, and no effective,
automated approach is currently available.
This paper proposes the first AST-based automated approach to systematically
refactor nested-IF formulae. The general idea is two-fold. First, we detect
and remove logical redundancy on the AST. Second, we identify higher-level
semantics that have been fragmented and scattered, and reassemble the syntax
using concise built-in functions. A comprehensive evaluation was conducted
against a real-world spreadsheet corpus collected at a leading IT company for
research purposes. The results, covering over 68,000 spreadsheets with 27
million nested-IF formulae, reveal that our approach is able to relieve the
smell of over 99% of nested-IF formulae. Over 50% of the refactorings reduced
the nesting level of the nested-IFs by more than half. In addition, a survey
involving 49 participants indicates that in most cases the participants prefer
the refactored formulae and agree that such an automated refactoring approach
is necessary and helpful.
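One refactoring in this spirit, rewriting a chain of nested IFs into Excel's flat IFS built-in, can be sketched over a toy AST; the tuple representation of the formula tree is an illustrative assumption, not the paper's internal format:

```python
def flatten_nested_if(node):
    """Rewrite IF(c1, v1, IF(c2, v2, ...)) into flat IFS(c1, v1, ..., TRUE, default)."""
    args = []
    # Walk down the else-branch chain, collecting (condition, value) pairs
    while isinstance(node, tuple) and node[0] == "IF":
        _, cond, then_val, else_val = node
        args += [cond, then_val]
        node = else_val
    args += ["TRUE", node]  # catch-all default branch of IFS
    return ("IFS", *args)

# Toy AST for: IF(A1>90,"A",IF(A1>80,"B",IF(A1>70,"C","D")))
formula = ("IF", "A1>90", '"A"',
           ("IF", "A1>80", '"B"',
            ("IF", "A1>70", '"C"', '"D"')))
flat = flatten_nested_if(formula)
```

The flattened form evaluates its condition/value pairs in order, so it preserves the original short-circuit semantics while reducing the nesting depth from three levels to one.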