Automated generation of computationally hard feature models using evolutionary algorithms
A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to assess the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size. Funded by the European Commission (FEDER), the Spanish Government and the Andalusian Government.
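
The search loop behind this idea is easy to sketch. Below is a minimal Python illustration of evolving input models to maximize a tool's measured runtime; the helpers passed in (analyze, random_model, crossover, mutate) and the parameter values are hypothetical placeholders, not ETHOM's actual encoding or operators.

```python
import random
import time

def evolve_hard_models(analyze, random_model, crossover, mutate,
                       pop_size=50, generations=100):
    """Evolve input models that maximize the runtime of `analyze`."""
    population = [random_model() for _ in range(pop_size)]

    def fitness(model):
        start = time.perf_counter()
        analyze(model)                          # run the analysis under test
        return time.perf_counter() - start      # longer runtime = fitter

    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = parents + children

    return max(population, key=fitness)         # hardest model found
```

The same loop maximizes memory consumption if the fitness function measures peak memory instead of wall-clock time.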
Automated sequence and motion planning for robotic spatial extrusion of 3D trusses
While robotic spatial extrusion has demonstrated a new and efficient means to
fabricate 3D truss structures at architectural scale, a major challenge remains
in automatically planning extrusion sequence and robotic motion for trusses
with unconstrained topologies. This paper presents the first attempt in the
field to rigorously formulate the extrusion sequence and motion planning (SAMP)
problem, using a CSP encoding. Furthermore, this research proposes a new
hierarchical planning framework to solve the extrusion SAMP problems that
usually have a long planning horizon and 3D configuration complexity. By
decoupling sequence and motion planning, the planning framework is able to
efficiently solve the extrusion sequence, end-effector poses, joint
configurations, and transition trajectories for spatial trusses with
nonstandard topologies. This paper also presents the first detailed
computational data revealing the runtime bottleneck in solving SAMP problems,
which provides insight and a comparison baseline for future algorithmic
development. Together with the algorithmic results, this paper also presents
an open-source and modularized software implementation called Choreo that is
machine-agnostic. To demonstrate the power of this algorithmic framework,
three case studies, including real fabrication and simulation results, are
presented.
Comment: 24 pages, 16 figures
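
The decoupling described above can be sketched as a two-level search: enumerate candidate extrusion sequences, then plan poses and trajectories element by element, falling back to the next sequence when motion planning fails. The helper functions below (feasible_sequences, plan_pose, plan_trajectory) are hypothetical stand-ins for the geometric and kinematic reasoning implemented in Choreo.

```python
def plan_extrusion(truss_elements, feasible_sequences,
                   plan_pose, plan_trajectory):
    """Search extrusion sequences first; for each candidate, plan an
    end-effector pose and a transition trajectory per element, and fall
    back to the next sequence when motion planning fails."""
    for sequence in feasible_sequences(truss_elements):
        plan, feasible = [], True
        for element in sequence:
            pose = plan_pose(element, plan)      # end-effector pose
            if pose is None:
                feasible = False                 # no collision-free pose:
                break                            # try the next sequence
            traj = plan_trajectory(pose, plan)   # joint-space motion
            if traj is None:
                feasible = False
                break
            plan.append((element, pose, traj))
        if feasible:
            return plan                          # full SAMP solution
    return None
```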
AlGaAs heterojunction lasers
The characterization of 8300 Å lasers was broadened, especially in the area of beam quality. Modulation rates up to 2 Gbit/sec at output powers of 20 mW were observed, waveform fidelity was fully adequate for low-BER data transmission, and wavefront measurements showed that phase aberrations were less than λ/50. Also, individually addressable arrays of up to ten contiguous diode lasers were fabricated and tested. Each laser operates at powers up to 30 mW CW in a single spatial mode. Shifting the operating wavelength of the basic CSP laser from 8300 Å to 8650 Å was accomplished by the addition of Si to the active region. Output power has reached 100 mW single mode, with excellent far-field wavefront properties. Operating life is currently approx. 1000 hrs at 35 mW CW. In addition, laser reliability, for operation at both 8300 Å and 8650 Å, has profited significantly from several developments in the processing procedures.
Mapping constrained optimization problems to quantum annealing with application to fault diagnosis
Current quantum annealing (QA) hardware suffers from practical limitations
such as finite temperature, sparse connectivity, small qubit numbers, and
control errors. We propose new algorithms for mapping Boolean constraint
satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In
particular we develop a new embedding algorithm for mapping a CSP onto a
hardware Ising model with a fixed sparse set of interactions, and propose two
new decomposition algorithms for solving problems too large to map directly
into hardware.
The mapping technique is locally structured, as hardware-compatible Ising
models are generated for each problem constraint, and variables appearing in
different constraints are chained together using ferromagnetic couplings. In
contrast, global embedding techniques generate a hardware-independent Ising
model for all the constraints, and then use a minor-embedding algorithm to
generate a hardware-compatible Ising model. We give an example of a class of
CSPs for which the scaling performance of D-Wave's QA hardware using the local
mapping technique is significantly better than with global embedding.
We validate the approach by applying D-Wave's hardware to circuit-based
fault diagnosis. For circuits that embed directly, we find that the hardware is
typically able to find all solutions from a min-fault diagnosis set of size N
using 1000N samples, with an annealing rate that is 25 times faster than a
leading SAT-based sampling method. Further, we apply decomposition algorithms
to find min-cardinality faults for circuits that are up to 5 times larger than
can be solved directly on current hardware.
Comment: 22 pages, 4 figures
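
The locally structured mapping can be illustrated with a small sketch: each constraint is compiled to its own Ising penalty model on a private set of qubits, and the copies of a logical variable shared across constraints are chained with ferromagnetic couplings. The penalty compiler (constraint_to_ising) and the chain strength below are illustrative assumptions, not the paper's actual values.

```python
def locally_structured_embedding(constraints, constraint_to_ising,
                                 chain_strength=-2.0):
    """Build one global Ising model (h, J) from per-constraint models."""
    h, J = {}, {}        # linear fields and couplings of the global model
    copies = {}          # logical variable -> its qubit copies

    for idx, constraint in enumerate(constraints):
        local_h, local_J = constraint_to_ising(constraint)
        # Give this constraint a private namespace of qubits.
        for var, bias in local_h.items():
            qubit = (idx, var)
            h[qubit] = bias
            copies.setdefault(var, []).append(qubit)
        for (v1, v2), coupling in local_J.items():
            J[((idx, v1), (idx, v2))] = coupling

    # Chain the copies of each shared variable with ferromagnetic
    # couplings (J < 0), so low-energy states make the copies agree.
    for qubits in copies.values():
        for a, b in zip(qubits, qubits[1:]):
            J[(a, b)] = chain_strength

    return h, J
```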
Engineering Crowdsourced Stream Processing Systems
A crowdsourced stream processing system (CSP) is a system that incorporates
crowdsourced tasks in the processing of a data stream. This can be seen as
enabling crowdsourcing work to be applied to a sample of large-scale data at
high speed, or equivalently, enabling stream processing to employ human
intelligence. It also leads to a substantial expansion of the capabilities of
data processing systems. Engineering a CSP system requires the combination of
human and machine computation elements. From a general systems theory
perspective, this means taking into account inherited as well as emergent
properties of both these elements. In this paper, we position CSP systems
within a broader taxonomy, outline a series of design principles and evaluation
metrics, present an extensible framework for their design, and describe several
design patterns. We showcase the capabilities of CSP systems by performing a
case study that applies our proposed framework to the design and analysis of a
real system (AIDR) that classifies social media messages during time-critical
crisis events. Results show that compared to a pure stream processing system,
AIDR can achieve a higher data classification accuracy, while compared to a
pure crowdsourcing solution, the system makes better use of human workers by
requiring much less manual effort.
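
A minimal sketch of the hybrid routing at the core of such a system: a machine classifier labels the messages it is confident about and defers the rest to crowd tasks. The classifier interface and the threshold here are hypothetical, not AIDR's actual components.

```python
from queue import Queue

def process_stream(messages, classify, confidence_threshold=0.8):
    """Label messages by machine when confident; otherwise enqueue a
    crowdsourcing task. Crowd answers can later retrain the classifier."""
    crowd_tasks = Queue()
    labeled = []
    for msg in messages:
        label, confidence = classify(msg)   # classify: msg -> (label, conf)
        if confidence >= confidence_threshold:
            labeled.append((msg, label))    # machine path
        else:
            crowd_tasks.put(msg)            # human path: ask the crowd
    return labeled, crowd_tasks
```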
Towards Symbolic Model-Based Mutation Testing: Combining Reachability and Refinement Checking
Model-based mutation testing uses altered test models to derive test cases
that are able to reveal whether a modelled fault has been implemented. This
requires conformance checking between the original and the mutated model. This
paper presents an approach for symbolic conformance checking of action systems,
which are well-suited to specify reactive systems. We also consider
nondeterminism in our models. Hence, we do not check for equivalence, but for
refinement. We encode the transition relation as well as the conformance
relation as a constraint satisfaction problem and use a constraint solver in
our reachability and refinement checking algorithms. Explicit conformance
checking techniques often face state space explosion. First experimental
evaluations show that our approach has the potential to outperform explicit
conformance checkers.
Comment: In Proceedings MBT 2012, arXiv:1202.582
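
The core check, finding a mutant transition that the original model cannot match, can be posed directly to an off-the-shelf constraint solver. Below is a minimal sketch with the Z3 solver over toy transition relations; the action-system encoding described in the paper is more elaborate.

```python
from z3 import Ints, Or, Not, Solver, sat

# Toy example: the mutant injects an extra transition (s -> s+3), and the
# solver searches for a step of the mutant that the original cannot take,
# i.e. a non-refinement witness.

s, s2 = Ints('s s2')

def trans_orig(pre, post):
    # Original model: nondeterministic step to pre+1 or pre+2.
    return Or(post == pre + 1, post == pre + 2)

def trans_mut(pre, post):
    # Mutated model: may additionally step to pre+3 (the injected fault).
    return Or(post == pre + 1, post == pre + 2, post == pre + 3)

solver = Solver()
solver.add(trans_mut(s, s2))        # the mutant takes a step...
solver.add(Not(trans_orig(s, s2)))  # ...that the original cannot match

if solver.check() == sat:
    m = solver.model()
    print("non-refinement witness:", m[s], "->", m[s2])
```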
Shadow Price Guided Genetic Algorithms
The Genetic Algorithm (GA) is a popular global search algorithm. Although it has been used successfully in many fields, performance challenges still limit GA's further success: it can be difficult to reach optimal solutions for complex problems, and difficult problems can take a very long time to solve. This dissertation researches new ways to improve GA's performance in terms of both solution quality and convergence speed. The main focus is to present the concept of shadow price and propose a two-measurement GA. The new algorithm uses the fitness value to measure solutions and the shadow price to evaluate components. New shadow-price-guided operators are used to achieve good, measurable evolutions. Simulation results have shown that the new shadow price guided genetic algorithm (SGA) is effective in terms of performance and efficient in terms of speed.
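
The two-measurement idea can be sketched concretely: fitness scores whole solutions, while a per-component shadow price, approximated here as the best fitness gain obtainable by changing that component alone, decides which component a mutation operator targets. The approximation and all names below are illustrative assumptions, not the dissertation's exact operators.

```python
def shadow_price(solution, fitness, component_values, i):
    """Estimate component i's marginal value: the best fitness gain
    obtainable by changing only that component."""
    base = fitness(solution)
    gains = (fitness(solution[:i] + [v] + solution[i + 1:]) - base
             for v in component_values)
    return max(gains)

def guided_mutate(solution, fitness, component_values):
    """Mutate the component with the highest shadow price, replacing it
    with its best available value."""
    prices = [shadow_price(solution, fitness, component_values, i)
              for i in range(len(solution))]
    i = prices.index(max(prices))
    best = max(component_values,
               key=lambda v: fitness(solution[:i] + [v] + solution[i + 1:]))
    return solution[:i] + [best] + solution[i + 1:]
```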
Developing a distributed electronic health-record store for India
The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.