Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search
Universal induction relies on some general search procedure that is doomed to
be inefficient. One possibility to achieve both generality and efficiency is to
specialize this procedure w.r.t. any given narrow task. However, complete
specialization, which implies a direct mapping from task parameters to
solutions (discriminative models) without search, is not always possible. In
this paper, partial specialization of general search is considered in the form
of genetic algorithms (GAs) with a specialized crossover operator. We perform a
feasibility study of this idea, implementing such an operator as a
deep feedforward neural network. GAs with trainable crossover operators are
compared with the result of complete specialization, which is also represented
as a deep neural network. Experimental results show that specialized GAs can be
more efficient than both general GAs and discriminative models.
Comment: AGI 2017 proceedings. The final publication is available at link.springer.com
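To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of a GA whose crossover operator is a small feedforward network that maps two parent genomes to an offspring. The toy fitness function, the network sizes and the residual blend are illustrative assumptions; in the paper the operator is trained, whereas here its weights are random placeholders and the training loop is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task standing in for a "narrow task": maximize closeness to a
    # hidden target vector (assumption; the paper's tasks differ).
    DIM, POP, GENS = 16, 40, 60
    target = rng.normal(size=DIM)

    def fitness(x):
        return -np.sum((x - target) ** 2)

    # Crossover as a tiny feedforward network: two parents in, one child out.
    # Random weights stand in for a trained operator.
    W1 = rng.normal(scale=0.3, size=(2 * DIM, 32))
    W2 = rng.normal(scale=0.3, size=(32, DIM))

    def neural_crossover(p1, p2):
        h = np.tanh(np.concatenate([p1, p2]) @ W1)
        # Residual blend keeps offspring near the parents.
        return h @ W2 + 0.5 * (p1 + p2)

    pop = rng.normal(size=(POP, DIM))
    for _ in range(GENS):
        scores = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(scores)[-POP // 2:]]   # truncation selection
        pop = np.array([neural_crossover(*parents[rng.choice(len(parents), 2)])
                        for _ in range(POP)])
        pop += rng.normal(scale=0.05, size=pop.shape)   # Gaussian mutation
    print("best fitness:", max(fitness(x) for x in pop))

Under this reading, "partial specialization" amounts to replacing only the recombination step of an otherwise general GA with a task-adapted learned component.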
Prediction of jet engine parameters for control design using genetic programming
The simulation of jet engine behaviour is widely used in many aspects of engine development and maintenance. Achieving high-quality jet engine control systems requires the iterative use of these simulations to virtually test the performance of the engine while avoiding any possible damage to the real engine. Jet engine simulations rely on mathematical models which are complex and may not always be available. This paper introduces an approach based on Genetic Programming (GP) to model, for control design, different parameters of a small engine, such as the Exhaust Gas Temperature (EGT). The GP approach has no knowledge of the characteristics of the engine; instead, the model is found by evolving candidate models against past measurements of parameters such as the pump voltage. Once the model is obtained, it is used to predict the behaviour of the jet engine one step ahead. The proposed approach is successfully applied to the simulation of a Behotec j66 jet engine and the results are presented.
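As a hedged illustration of this kind of approach, the sketch below uses the open-source gplearn library's SymbolicRegressor in place of the authors' GP system; the toy dynamics, variable choices and parameter values are assumptions, with synthetic data standing in for the Behotec j66 measurements. The evolved expression predicts the next exhaust gas temperature from the pump voltage and the current EGT.

    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(0)
    n = 500
    voltage = 2.0 + np.sin(np.arange(n) / 30.0)     # hypothetical pump voltage
    egt = np.zeros(n)
    for i in range(1, n):                           # invented engine dynamics
        egt[i] = 0.9 * egt[i - 1] + 40.0 * voltage[i - 1] + rng.normal(scale=0.5)

    X = np.column_stack([voltage[:-1], egt[:-1]])   # inputs at time t
    y = egt[1:]                                     # target: EGT one step ahead

    model = SymbolicRegressor(population_size=500, generations=20,
                              function_set=('add', 'sub', 'mul'),
                              random_state=0)
    model.fit(X[:400], y[:400])
    print("held-out R^2:", model.score(X[400:], y[400:]))
    print("evolved model:", model._program)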
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
increased considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility and stability, ForneyLab appears to enjoy an
edge over its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate Reasoning
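ForneyLab itself is a Julia toolbox; the Python sketch below only illustrates the sum-product message passing that underlies such frameworks, on an invented two-variable discrete model, and makes no use of ForneyLab's API. The posterior over the hidden variable emerges as the normalized product of the messages arriving on its edge.

    import numpy as np

    # Invented toy model: hidden X in {0,1}, observed Y in {0,1}.
    prior = np.array([0.7, 0.3])          # factor f1(x) = p(x)
    likelihood = np.array([[0.9, 0.1],    # factor f2(x, y) = p(y | x),
                           [0.2, 0.8]])   # rows indexed by x, columns by y
    y_obs = 1

    # Sum-product messages on the chain  f1 --X-- f2 --Y (observed):
    msg_f1_to_x = prior                       # forward message from the prior
    msg_y_to_f2 = np.eye(2)[y_obs]            # evidence message from Y
    msg_f2_to_x = likelihood @ msg_y_to_f2    # marginalize Y out of f2

    # Marginal of X: product of incoming messages, normalized.
    posterior = msg_f1_to_x * msg_f2_to_x
    posterior /= posterior.sum()
    print("p(x | y=1) =", posterior)          # agrees with Bayes' rule

The modularity claimed for the FFG framework shows up even here: the prior and likelihood factors are independent objects, so either can be swapped without touching the message-passing schedule.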
Towards Query Logs for Privacy Studies: On Deriving Search Queries from Questions
Translating verbose information needs into crisp search queries is a
phenomenon that is ubiquitous but hardly understood. Insights into this process
could be valuable in several applications, including synthesizing large
privacy-friendly query logs from public Web sources which are readily available
to the academic research community. In this work, we take a step towards
understanding query formulation by tapping into the rich potential of community
question answering (CQA) forums. Specifically, we sample natural language (NL)
questions spanning diverse themes from the Stack Exchange platform, and conduct
a large-scale conversion experiment where crowdworkers submit search queries
they would use when looking for equivalent information. We provide a careful
analysis of this data, accounting for possible sources of bias during
conversion, along with insights into user-specific linguistic patterns and
search behaviors. We release a dataset of 7,000 question-query pairs from this
study to facilitate further research on query understanding.
Comment: ECIR 2020 Short Paper
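For contrast with the crowdsourced conversions studied in the paper, a naive keyword-extraction baseline for turning a verbose question into a crisp query might look like the sketch below; the stopword list, term cap and example are illustrative assumptions, not part of the study.

    import re

    # Hypothetical minimal stopword list; a real baseline would use a larger one.
    STOPWORDS = {"how", "do", "i", "a", "an", "the", "is", "are", "what",
                 "why", "can", "to", "of", "in", "on", "for", "my", "it"}

    def question_to_query(question, max_terms=5):
        tokens = re.findall(r"[a-z0-9']+", question.lower())
        content = [t for t in tokens if t not in STOPWORDS]
        # Keep first occurrences in order and truncate to a short query.
        seen, query = set(), []
        for t in content:
            if t not in seen:
                seen.add(t)
                query.append(t)
        return " ".join(query[:max_terms])

    print(question_to_query("How do I restore a deleted branch in git?"))
    # -> restore deleted branch git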
Efficient Learning and Evaluation of Complex Concepts in Inductive Logic Programming
Inductive Logic Programming (ILP) is a subfield of Machine Learning with foundations in logic
programming. In ILP, logic programming, a subset of first-order logic, is used as a uniform
representation language for the problem specification and induced theories. ILP has been
successfully applied to many real-world problems, especially in the biological domain (e.g. drug
design, protein structure prediction), where relational information is of particular importance.
The expressiveness of logic programs grants flexibility in specifying the learning task and lends understandability
to the induced theories. However, this flexibility comes at a high computational
cost, constraining the applicability of ILP systems. Constructing and evaluating complex concepts
remain two of the main issues that prevent ILP systems from tackling many learning
problems. These learning problems are interesting both from a research perspective, as they
raise the standards for ILP systems, and from an application perspective, where these target
concepts naturally occur in many real-world applications. Such complex concepts cannot
be constructed or evaluated by parallelizing existing top-down ILP systems or improving the
underlying Prolog engine. Novel search strategies and cover algorithms are needed.
The main focus of this thesis is on how to efficiently construct and evaluate complex hypotheses
in an ILP setting. In order to construct such hypotheses we investigate two approaches.
The first, the Top Directed Hypothesis Derivation framework, implemented in the ILP system
TopLog, involves the use of a top theory to constrain the hypothesis space. In the second approach
we revisit the bottom-up search strategy of Golem, lifting its restriction on determinate
clauses which had rendered Golem inapplicable to many key areas. These developments led to
the bottom-up ILP system ProGolem. A challenge that arises with a bottom-up approach is the
coverage computation of long, non-determinate clauses, for which Prolog’s
SLD-resolution is no longer adequate. We developed a new, Prolog-based
theta-subsumption engine which is significantly
more efficient than SLD-resolution in computing the coverage of such complex clauses.
We provide evidence that ProGolem achieves the goal of learning complex concepts by presenting
a protein-hexose binding prediction application. The theory ProGolem induced has
statistically significantly better predictive accuracy than that of other learners. More importantly,
the biological insights ProGolem’s theory provided were judged by domain experts to
be relevant and, in some cases, novel.
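To illustrate the decision a theta-subsumption engine makes, here is a minimal Python sketch (the thesis' engine is Prolog-based and far more efficient): clause C theta-subsumes clause D if some substitution of C's variables maps every literal of C onto a literal of D. The literal encoding and example clauses are assumptions for illustration.

    def is_var(term):
        # Convention: variables are capitalized strings (assumption).
        return isinstance(term, str) and term[:1].isupper()

    def match_literal(lit_c, lit_d, theta):
        # Try to extend substitution theta so lit_c, under theta, equals lit_d.
        (pred_c, args_c), (pred_d, args_d) = lit_c, lit_d
        if pred_c != pred_d or len(args_c) != len(args_d):
            return None
        theta = dict(theta)
        for a, b in zip(args_c, args_d):
            if is_var(a):
                if theta.setdefault(a, b) != b:
                    return None
            elif a != b:
                return None
        return theta

    def theta_subsumes(c, d, theta=None):
        # Backtracking search: every literal of c must map onto some literal of d.
        theta = theta or {}
        if not c:
            return True
        for lit_d in d:
            extended = match_literal(c[0], lit_d, theta)
            if extended is not None and theta_subsumes(c[1:], d, extended):
                return True
        return False

    # Invented grandparent-style example: C subsumes D via X=ann, Y=bob, Z=carol.
    C = [("parent", ("X", "Y")), ("parent", ("Y", "Z"))]
    D = [("parent", ("ann", "bob")), ("parent", ("bob", "carol"))]
    print(theta_subsumes(C, D))   # True

The cost of this search grows quickly with clause length and shared variables, which is exactly why coverage testing of long, non-determinate clauses demands an engine more efficient than naive SLD-resolution.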
Lucene4IR: Developing information retrieval evaluation resources using Lucene
The workshop and hackathon on developing Information Retrieval Evaluation Resources using Lucene (L4IR) was held on the 8th and 9th of September, 2016 at the University of Strathclyde in Glasgow, UK, and was funded by the ESF Elias Network. The event featured three main elements: (i) a series of keynote and invited talks on industry, teaching and evaluation; (ii) planning, coding and hacking, where a number of groups created modules and infrastructure for using Lucene to undertake TREC-based evaluations; and (iii) a number of breakout groups discussing challenges, opportunities and problems in bridging the divide between academia and industry, and how Lucene can be used for teaching and learning Information Retrieval (IR). The event brought together a blend of academics, experts and students wanting to learn, share and create evaluation resources for the community. The hacking was intense and the discussions lively, creating the basis of many useful tools but also raising numerous issues. It was clear that, by adopting and contributing to the most widely used and supported open-source IR toolkit, there are many benefits for academics, students, researchers, developers and practitioners: a basis for stronger evaluation practices, increased reproducibility, more efficient knowledge transfer, greater collaboration between academia and industry, and shared teaching and training resources.