Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution
Many real-world control and classification tasks involve a large number of
features. When artificial neural networks (ANNs) are used for modeling these
tasks, the network architectures tend to be large. Neuroevolution is an
effective approach for optimizing ANNs; however, there are two bottlenecks that
make its application challenging in the case of high-dimensional networks using
direct encoding. First, classic evolutionary algorithms tend not to scale well
for searching large parameter spaces; second, the network evaluation over a
large number of training instances is in general time-consuming. In this work,
we propose an approach called the Limited Evaluation Cooperative
Co-evolutionary Differential Evolution algorithm (LECCDE) to optimize
high-dimensional ANNs.
The proposed method aims to optimize the pre-synaptic weights of each
post-synaptic neuron in separate subpopulations using a Cooperative
Co-evolutionary Differential Evolution algorithm, and employs a limited
evaluation scheme where fitness evaluation is performed on a relatively small
number of training instances based on fitness inheritance. We test LECCDE on
three datasets of various sizes, and our results show that cooperative
co-evolution significantly reduces the test error compared to standard
Differential Evolution, while the limited evaluation scheme yields a
significant reduction in computing time.
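As a rough illustration of the decomposition behind LECCDE, the sketch below assigns the incoming (pre-synaptic) weights of each neuron to its own subpopulation, runs one DE/rand/1/bin step per subpopulation against the current best context vector, and estimates fitness on a small random batch of training instances. The network size, hyperparameters, loss, and batch-based evaluation are illustrative assumptions; the paper's fitness-inheritance bookkeeping is not reproduced here.

```python
# Hedged sketch of cooperative co-evolutionary DE with limited evaluation:
# one subpopulation per post-synaptic neuron (its incoming weights), a
# DE/rand/1/bin step per subpopulation, and fitness estimated on a mini-batch.
import numpy as np

rng = np.random.default_rng(0)

def network_loss(flat_weights, X, y, n_in, n_hidden):
    """MSE of a one-hidden-layer tanh network; row k of W1 holds the
    incoming (pre-synaptic) weights of hidden neuron k."""
    W1 = flat_weights[: n_in * n_hidden].reshape(n_hidden, n_in)
    w2 = flat_weights[n_in * n_hidden :]
    h = np.tanh(X @ W1.T)
    return float(np.mean((h @ w2 - y) ** 2))

def de_step(pop, scores, evaluate, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation over a single subpopulation."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        trial = np.where(rng.random(d) < CR, a + F * (b - c), pop[i])
        s = evaluate(trial)
        if s < scores[i]:
            pop[i], scores[i] = trial, s
    return pop, scores

# Toy regression task and network shape (illustrative only).
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
n_in, n_hidden = 8, 4
dims = [n_in] * n_hidden + [n_hidden]       # one subcomponent per post-synaptic neuron
best = [rng.normal(size=d) for d in dims]   # context vector: current best per subcomponent
pops = [rng.normal(size=(10, d)) for d in dims]

for generation in range(20):
    batch = rng.choice(len(X), 32, replace=False)   # limited evaluation on a small batch
    Xb, yb = X[batch], y[batch]
    for k in range(len(pops)):
        def evaluate(candidate, k=k):
            parts = list(best)
            parts[k] = candidate
            return network_loss(np.concatenate(parts), Xb, yb, n_in, n_hidden)
        scores = np.array([evaluate(ind) for ind in pops[k]])
        pops[k], scores = de_step(pops[k], scores, evaluate)
        best[k] = pops[k][int(np.argmin(scores))]

print("final training MSE:", network_loss(np.concatenate(best), X, y, n_in, n_hidden))
```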
Target Directed Event Sequence Generation for Android Applications
Testing is a commonly used approach to ensure software quality, and
model-based testing is an active research topic for testing GUI programs such
as Android applications (apps). Existing approaches mainly either dynamically
construct a model that only contains the GUI information, or build a model
from the code perspective, which may fail to describe the changes of GUI
widgets at runtime.
Moreover, most of these models do not support the back stack, a mechanism
particular to Android. Therefore, this paper proposes a model, LATTE, that is
constructed dynamically with consideration of the view information in the
widgets as well as the back stack, to describe the transition between GUI
widgets. We also propose a label set to link the elements of the LATTE model to
program snippets. The user can define a subset of the label set as a target for
the testing requirements that need to cover some specific parts of the code. To
avoid the state explosion problem during model construction, we introduce a
notion of "state similarity" to balance model accuracy against analysis cost.
Based on this model, a target directed test generation method is presented to
generate event sequences to effectively cover the target. The experiments on
several real-world apps indicate that test cases generated from LATTE achieve
high coverage, and that the model allows a given target to be covered with
short event sequences.
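The abstract does not spell out the definition of "state similarity"; purely as an illustration, a merge test of this kind could abstract each GUI state to a set of widget signatures and merge two states when their overlap exceeds a threshold. The signature fields, the Jaccard measure, and the threshold below are assumptions for the sketch, not the paper's definition.

```python
# Hedged sketch of a state-similarity merge test for GUI model construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Widget:
    kind: str         # e.g. "Button", "EditText"
    resource_id: str  # Android view id
    enabled: bool

def signature(widgets):
    """Abstract a GUI state to the set of its widget signatures."""
    return {(w.kind, w.resource_id, w.enabled) for w in widgets}

def similar(state_a, state_b, threshold=0.8):
    """Jaccard similarity between widget signature sets; states above the
    threshold would be merged instead of added as new model states."""
    sa, sb = signature(state_a), signature(state_b)
    if not sa and not sb:
        return True
    return len(sa & sb) / len(sa | sb) >= threshold

s1 = [Widget("Button", "id/send", True), Widget("EditText", "id/msg", True)]
s2 = [Widget("Button", "id/send", True), Widget("EditText", "id/msg", False)]
print(similar(s1, s2))  # False with the default threshold: overlap is 1/3
```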
Automatic Differentiation of Rigid Body Dynamics for Optimal Control and Estimation
Many algorithms for control, optimization and estimation in robotics depend
on derivatives of the underlying system dynamics, e.g. to compute
linearizations, sensitivities or gradient directions. However, we show that
when dealing with Rigid Body Dynamics, these derivatives are difficult to
derive analytically and to implement efficiently. To overcome this issue, we
extend the modelling tool RobCoGen to be compatible with Automatic
Differentiation. Additionally, we propose how to automatically obtain the
derivatives and generate highly efficient source code. We highlight the
flexibility and performance of the approach in two application examples. First,
we show a Trajectory Optimization example for the quadrupedal robot HyQ, which
employs auto-differentiation on the dynamics including a contact model. Second,
we present a hardware experiment in which a 6 DoF robotic arm avoids a randomly
moving obstacle in a go-to task by fast, dynamic replanning.
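To illustrate what such dynamics derivatives look like when obtained by automatic differentiation, the sketch below applies JAX to a toy damped pendulum instead of RobCoGen's generated C++ code; jax.jacfwd produces the linearization matrices A = df/dx and B = df/du that trajectory optimizers and fast replanners consume. The model and its parameters are illustrative.

```python
# Hedged sketch: automatic differentiation of a toy dynamics model
# (a damped, torque-driven pendulum) to obtain control linearizations.
import jax
import jax.numpy as jnp

def pendulum_dynamics(x, u, g=9.81, l=1.0, m=1.0, b=0.1):
    """Continuous-time dynamics xdot = f(x, u)."""
    theta, theta_dot = x
    theta_ddot = (-g / l) * jnp.sin(theta) - (b / (m * l**2)) * theta_dot + u[0] / (m * l**2)
    return jnp.array([theta_dot, theta_ddot])

x0 = jnp.array([0.3, 0.0])   # state: angle, angular velocity
u0 = jnp.array([0.0])        # input: joint torque

A = jax.jacfwd(pendulum_dynamics, argnums=0)(x0, u0)  # df/dx, shape (2, 2)
B = jax.jacfwd(pendulum_dynamics, argnums=1)(x0, u0)  # df/du, shape (2, 1)
print(A)
print(B)
```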
Automated Test Input Generation for Android: Are We There Yet?
Mobile applications, often simply called "apps", are increasingly widespread,
and we use them daily to perform a number of activities. Like all software,
apps must be adequately tested to gain confidence that they behave correctly.
Therefore, in recent years, researchers and practitioners alike have begun to
investigate ways to automate app testing. In particular, because of Android's
open source nature and its large share of the market, a great deal of research
has been performed on input generation techniques for apps that run on the
Android operating system. At this point in time, there are in fact a number of
such techniques in the literature, which differ in the way they generate
inputs, the strategy they use to explore the behavior of the app under test,
and the specific heuristics they use. To better understand the strengths and
weaknesses of these existing approaches, and get general insight on ways they
could be made more effective, in this paper we perform a thorough comparison of
the main existing test input generation tools for Android. In our comparison,
we evaluate the effectiveness of these tools, and their corresponding
techniques, according to four metrics: code coverage, ability to detect faults,
ability to work on multiple platforms, and ease of use. Our results provide a
clear picture of the state of the art in input generation for Android apps and
identify future research directions that, if suitably investigated, could lead
to more effective and efficient testing tools for Android.
Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach
Many algorithms in workflow scheduling and resource provisioning rely on the
performance estimation of tasks to produce a scheduling plan. A profiler that
is capable of modeling the execution of tasks and predicting their runtime
accurately, therefore, becomes an essential part of any Workflow Management
System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS)
platforms that use clouds for deploying scientific workflows, task runtime
prediction becomes more challenging because it requires the processing of a
significant amount of data in a near real-time scenario while dealing with the
performance variability of cloud resources. Hence, relying on methods such as
profiling tasks' execution data using basic statistical description (e.g.,
mean, standard deviation) or batch offline regression techniques to estimate
the runtime may not be suitable for such environments. In this paper, we
propose an online incremental learning approach to predict the runtime of tasks
in scientific workflows in clouds. To improve the performance of the
predictions, we harness fine-grained resource monitoring data in the form of
time-series records of CPU utilization, memory usage, and I/O activities that
reflect the unique characteristics of a task's execution. We compare our
solution to a state-of-the-art approach that exploits the resource monitoring
data based on a regression machine learning technique. In our experiments, the
proposed strategy improves performance, in terms of prediction error, by up to
29.89% compared to the state-of-the-art solutions.
Comment: Accepted for presentation at the main conference track of the 11th
IEEE/ACM International Conference on Utility and Cloud Computing.
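As an illustration of the online incremental idea (not the authors' pipeline), the sketch below updates a scikit-learn SGDRegressor with partial_fit as each simulated task finishes, using a few coarse resource-usage features; the feature set, the simulated stream, and the model choice are assumptions.

```python
# Hedged sketch: incremental runtime prediction updated one finished task at
# a time, instead of batch offline regression over historical data.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
model = SGDRegressor(learning_rate="adaptive", eta0=0.01)
scaler = StandardScaler()

def stream_of_finished_tasks(n=500):
    """Simulated stream: (input_mb, mean_cpu_util, mean_mem_mb, io_mb) -> runtime_s."""
    for _ in range(n):
        x = rng.uniform([10, 0.2, 256, 1], [1000, 1.0, 4096, 200])
        runtime = 0.05 * x[0] / x[1] + 0.001 * x[3] + rng.normal(0, 2)
        yield x, runtime

errors = []
for i, (x, y) in enumerate(stream_of_finished_tasks()):
    X = x.reshape(1, -1)
    scaler.partial_fit(X)            # keep running feature statistics
    Xs = scaler.transform(X)
    if i > 10:                       # predict before learning from this sample
        errors.append(abs(model.predict(Xs)[0] - y))
    model.partial_fit(Xs, [y])       # incremental update per finished task

print("mean absolute error over the stream:", np.mean(errors))
```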
IntRepair: Informed Repairing of Integer Overflows
Integer overflows have threatened software applications for decades. Thus, in
this paper, we propose a novel technique to provide automatic repairs of
integer overflows in C source code. Our technique, based on static symbolic
execution, fuses detection, repair generation and validation. This technique is
implemented in a prototype named IntRepair. We applied IntRepair to 2,052 C
programs (approx. 1 million lines of code) contained in SAMATE's Juliet test
suite and 50 synthesized programs that range up to 20KLOC. Our experimental
results show that IntRepair is able to effectively detect integer overflows and
successfully repair them, while increasing the source code (LOC) and binary
(KB) sizes by only around 1% each. Further, we present the results of
a user study with 30 participants which shows that IntRepair repairs are more
than 10x more efficient than manually generated code repairs.
Comment: Accepted for publication at the IEEE TSE journal. arXiv admin note:
text overlap with arXiv:1710.0372
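The flavor of the detection step can be illustrated with a symbolic check for signed 32-bit addition overflow. The sketch below uses the z3 solver as a stand-in for the paper's static symbolic execution engine, and the guarded-addition template it prints is a common manual repair pattern, not IntRepair's actual output.

```python
# Hedged sketch: ask a solver whether a + b can overflow as signed 32-bit
# integers; if so, report a witness and a guarded-addition repair template.
from z3 import BitVec, Solver, BVAddNoOverflow, BVAddNoUnderflow, Not, And, sat

a, b = BitVec("a", 32), BitVec("b", 32)

s = Solver()
# Satisfiable iff some inputs violate the no-overflow/no-underflow predicates.
s.add(Not(And(BVAddNoOverflow(a, b, True), BVAddNoUnderflow(a, b))))

if s.check() == sat:
    m = s.model()
    print("overflow witness: a =", m[a], ", b =", m[b])
    print("suggested guarded addition:")
    print("  if (b > 0 && a > INT_MAX - b) { /* handle overflow */ }")
    print("  else if (b < 0 && a < INT_MIN - b) { /* handle underflow */ }")
    print("  else { result = a + b; }")
```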
Automatic Software Repair: a Bibliography
This article presents a survey on automatic software repair. Automatic
software repair consists of automatically finding a solution to software bugs
without human intervention. This article considers all kinds of repairs. First,
it discusses behavioral repair where test suites, contracts, models, and
crashing inputs are taken as oracles. Second, it discusses state repair, also
known as runtime repair or runtime recovery, with techniques such as checkpoint
and restart, reconfiguration, and invariant restoration. The uniqueness of this
article is that it spans the research communities that contribute to this body
of knowledge: software engineering, dependability, operating systems,
programming languages, and security. It provides a novel and structured
overview of the diversity of bug oracles and repair operators used in the
literature.
Portfolio Methods for Optimal Planning: an Empirical Analysis
Combining the complementary strengths of several algorithms through portfolio approaches has been demonstrated to be effective in solving a wide range of AI problems. Notably, portfolio techniques have been prominently applied to suboptimal (satisficing) AI planning. Here, we consider the construction of sequential planner portfolios for (domain-independent) optimal planning. Specifically, we introduce four techniques (three of which are dynamic) for per-instance planner schedule generation using problem instance features, and investigate the usefulness of a range of static and dynamic techniques for combining planners. Our extensive experimental analysis demonstrates the benefits of using static and dynamic sequential portfolios for optimal planning, and provides insights on the most suitable conditions for their fruitful exploitation.
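For intuition about sequential portfolios, the sketch below shows a generic greedy construction (not one of the four per-instance techniques introduced in the paper): given recorded solve times of each planner on training instances and a total budget, it repeatedly picks the (planner, time-slice) pair that covers the most not-yet-solved instances per second of budget.

```python
# Hedged sketch of greedy static portfolio construction; planner names,
# time slices, and solve times are illustrative toy data.
def greedy_schedule(solve_times, budget, slices=(10, 60, 300, 900)):
    """solve_times[planner][instance] = seconds to solve, or None if unsolved."""
    remaining, covered, schedule = budget, set(), []
    while remaining > 0:
        best = None
        for planner, times in solve_times.items():
            for t in slices:
                if t > remaining:
                    continue
                solved = {i for i, s in times.items()
                          if s is not None and s <= t and i not in covered}
                if solved and (best is None or len(solved) / t > best[0]):
                    best = (len(solved) / t, planner, t, solved)
        if best is None:
            break
        _, planner, t, solved = best
        schedule.append((planner, t))
        covered |= solved
        remaining -= t
    return schedule, covered

solve_times = {
    "planner_A": {"p1": 5, "p2": None, "p3": 200},
    "planner_B": {"p1": 50, "p2": 8, "p3": None},
}
print(greedy_schedule(solve_times, budget=300))
```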
A comprehensive evaluation of alignment algorithms in the context of RNA-seq.
Transcriptome sequencing (RNA-Seq) overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed, also including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus essentially making hash-based aligners obsolete.
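The precision/recall bookkeeping behind such a benchmark can be sketched as follows: each simulated read has a known true position, and a reported alignment counts as correct when it falls within a small tolerance of that position. The data layout and tolerance are illustrative assumptions, not the evaluation protocol of the paper.

```python
# Hedged sketch: precision and recall of read alignments against known truth.
def precision_recall(truth, reported, tolerance=5):
    """truth: read_id -> true position; reported: read_id -> aligned position."""
    correct = sum(
        1 for r, pos in reported.items()
        if r in truth and abs(pos - truth[r]) <= tolerance
    )
    precision = correct / len(reported) if reported else 0.0
    recall = correct / len(truth) if truth else 0.0
    return precision, recall

truth = {"read1": 1000, "read2": 5000, "read3": 9000}
reported = {"read1": 1002, "read2": 7500}   # read3 unaligned, read2 misplaced
print(precision_recall(truth, reported))    # (0.5, 0.333...)
```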