Squeaky Wheel Optimization
We describe a general approach to optimization which we term `Squeaky Wheel'
Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution
which is then analyzed to find the trouble spots, i.e., those elements that,
if improved, are likely to improve the objective function score. The results of
the analysis are used to generate new priorities that determine the order in
which the greedy algorithm constructs the next solution. This
Construct/Analyze/Prioritize cycle continues until some limit is reached, or an
acceptable solution is found. SWO can be viewed as operating on two search
spaces: solutions and prioritizations. Successive solutions are only indirectly
related, via the re-prioritization that results from analyzing the prior
solution. Similarly, successive prioritizations are generated by constructing
and analyzing solutions. This `coupled search' has some interesting properties,
which we discuss. We report encouraging experimental results on two domains,
scheduling problems that arise in fiber-optic cable manufacturing, and graph
coloring problems. The fact that these domains are very different supports our
claim that SWO is a general technique for optimization.
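The Construct/Analyze/Prioritize cycle described above can be sketched for the graph-coloring domain the abstract mentions. This is a minimal illustration under our own simplifying assumptions, not the paper's implementation: the function names, the "blame the highest color" analysis rule, and the priority increments are all ours.

```python
import random

def greedy_color(order, adj):
    """Construct: color vertices in the given order with the smallest free color."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def squeaky_wheel_coloring(adj, iters=100):
    """A minimal SWO sketch: Construct / Analyze / Prioritize loop.

    Priorities start equal; vertices that received the worst (highest)
    color are treated as trouble spots and get their priority bumped,
    so the greedy constructor handles them earlier next iteration.
    """
    priority = {v: 0.0 for v in adj}
    best = None
    for _ in range(iters):
        # Construct: order by descending priority, random tie-breaking.
        order = sorted(adj, key=lambda v: (-priority[v], random.random()))
        color = greedy_color(order, adj)
        if best is None or max(color.values()) < max(best.values()):
            best = color
        # Analyze + Prioritize: blame the vertices with the worst color.
        worst = max(color.values())
        for v, c in color.items():
            if c == worst:
                priority[v] += 1.0
    return best
```

Note that successive colorings are related only through the priorities, matching the "coupled search" view: the loop never edits a solution directly.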
Evolutionary squeaky wheel optimization: a new framework for analysis
Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration SWO does a complete construction of a solution, starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding means that SWO is generally effective at diversification but can suffer from relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve the intensification by keeping the good components of solutions and using SWO only to reconstruct the poorer components. In such algorithms a standard challenge is to understand how the various parameters affect the search process. To support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used for local search methods (such as simulated annealing), which move only through complete assignments. Generally, the exact details of ESWO depend on various heuristics, so we focus our approach on a variant of ESWO that we call ESWO-II, which has probabilistic rather than heuristic selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary probability distribution over the states of the search space. We find interesting properties of the distribution. In particular, the probabilities of states generally, but not always, increase with their fitness. This non-monotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.
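The kind of Markov-chain analysis described here ultimately reduces to computing a stationary distribution over the states of the search space. The toy example below is not the ESWO-II chain from the paper; it only illustrates, for an arbitrary small row-stochastic matrix, how such a distribution can be obtained by power iteration.

```python
def stationary_distribution(P, iters=10_000, tol=1e-12):
    """Compute the stationary distribution pi of a row-stochastic
    matrix P (pi = pi @ P at the fixed point) by power iteration.

    P is a list of rows; each row sums to 1. Works for any ergodic
    chain; this is a generic numerical sketch, not tied to ESWO-II.
    """
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, new)) < tol:
            return new
        pi = new
    return pi
```

For a two-state chain with transition matrix `[[0.9, 0.1], [0.5, 0.5]]`, balance gives `pi = (5/6, 1/6)`, which the iteration recovers.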
Understanding Algorithm Performance on an Oversubscribed Scheduling Application
The best performing algorithms for a particular oversubscribed scheduling
application, Air Force Satellite Control Network (AFSCN) scheduling, appear to
have little in common. Yet, through careful experimentation and modeling of
performance in real problem instances, we can relate characteristics of the
best algorithms to characteristics of the application. In particular, we find
that plateaus dominate the search spaces (thus favoring algorithms that make
larger changes to solutions) and that some randomization in exploration is
critical to good performance (due to the lack of gradient information on the
plateaus). Based on our explanations of algorithm performance, we develop a new
algorithm that combines characteristics of the best performers; the new
algorithm's performance is better than the previous best. We show how
hypothesis-driven experimentation and search modeling can both explain
algorithm performance and motivate the design of a new algorithm.
Framework for sustainable TVET-Teacher Education Program in Malaysia Public Universities
Studies have indicated that less attention is given to the education aspect, such as
teaching and learning, when planning improvements to the TVET system. In the 21st
Century context, the current teaching paradigm of TVET educators has also been
reported to be flawed and in need of a shift. These shortcomings reportedly hinder
the country from achieving the 5th strategy in the Strategic Plan for Vocational
Education Transformation, which aims to transform the TVET system as a whole.
Therefore, this study aims to develop a framework for a sustainable TVET Teacher
Education (TVET-TE) program in Malaysia. The study adopted an exploratory
sequential mixed-method design, involving a semi-structured interview (phase one)
and a survey (phase two). Nine experts, chosen through purposive sampling, took
part in phase one. In phase two, 118 TVET-TE program lecturers were selected as
the survey sample through random sampling. After data analysis in phase one
(thematic analysis) and phase two (Principal Component Analysis), eight domains
and 22 elements were identified for the framework for a sustainable TVET-TE
program in Malaysia. The framework was found to embed the elements of 21st
Century Education, thus filling the gap addressed by this research. The findings
also indicate that the developed framework is unidimensional and valid for
development and research regarding TVET-TE programs in Malaysia. Lastly, it is
hoped that this research can serve as a guide for the nation in producing quality
TVET teachers in the future.
GIB: Imperfect Information in a Computationally Challenging Game
This paper investigates the problems arising in the construction of a program
to play the game of contract bridge. These problems include both the difficulty
of solving the game's perfect information variant, and techniques needed to
address the fact that bridge is not, in fact, a perfect information game. GIB,
the program being described, involves five separate technical advances:
partition search, the practical application of Monte Carlo techniques to
realistic problems, a focus on achievable sets to solve problems inherent in
the Monte Carlo approach, an extension of alpha-beta pruning from total orders
to arbitrary distributive lattices, and the use of squeaky wheel optimization
to find approximately optimal solutions to cardplay problems. GIB is believed
to be of approximately expert caliber, and is currently the strongest computer
bridge program in the world.
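GIB's actual cardplay machinery combines partition search, achievable sets, and lattice-valued alpha-beta; the snippet below illustrates only the generic Monte Carlo idea named in the abstract, i.e., sampling hidden states (deals) consistent with what has been observed, solving each sample with a perfect-information evaluator, and voting. All function names here are hypothetical placeholders, not GIB's API.

```python
import random

def monte_carlo_move(my_moves, sample_hidden_state, evaluate,
                     n_samples=50, seed=0):
    """Generic Monte Carlo play under imperfect information:
    - sample_hidden_state(rng): returns a full state (e.g. a deal)
      consistent with everything observed so far;
    - evaluate(state, move): a perfect-information score for playing
      `move` in that fully revealed state.
    The move with the best average score across samples is chosen."""
    rng = random.Random(seed)
    totals = {m: 0.0 for m in my_moves}
    for _ in range(n_samples):
        state = sample_hidden_state(rng)
        for m in my_moves:
            totals[m] += evaluate(state, m)
    return max(my_moves, key=lambda m: totals[m])
```

The "achievable sets" idea mentioned in the abstract addresses a known weakness of this plain averaging scheme: a move can score well on average across samples yet never be best in any single deal.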
Automatic Scheduling for a Ground Segment as a Service Platform Dedicated to Small Satellites
Together with the development of nano, micro, and small satellite missions and constellations, the necessity for efficient and tailored ground segments is rising. The peculiarities of the market, together with the technological developments of recent years, have led to the idea of ground segment as a service. To meet these needs, Leaf Space introduced Leaf Line. An essential part of such a service consists of scheduling contact windows over the worldwide-deployed network of ground stations. This is an NP-hard problem, which is often solved with methods belonging to the class of operational research. Generally, the orbits of small satellites are very low and characterized by short contact windows. This condition leads to needs quite different from those associated with long-lived high-orbit satellites, on which most of the literature on scheduling algorithms for telecommunication systems is focused. Furthermore, a service dedicated to SMEs and NewSpace startups brings additional challenges linked to customer needs. These peculiarities require the development of new, tailored scheduling algorithms. In the proposed strategy it is assumed that no information about the state of the satellite (stored data and available energy) is available, and that the start and end of contact windows are fixed. In this work, the scheduling is treated as a highly constrained combinatorial optimization problem; various approaches are described and then compared. The algorithms are iterative, and they all leverage the structure of the problem; specifically, many efforts are made to appropriately reduce the search space. Although optimality cannot be guaranteed, good solutions that are reasonably close to optimal can be obtained. It is found that, depending on the problem settings, different algorithms can stand out as the best ones.
This paper presents the work done on the scheduling library that currently powers the Leaf Line network, a platform offering an easy-to-use, cloud-based, high-availability ground segment service for small satellite operators.
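Under the stated assumption that contact-window start and end times are fixed, a classic baseline for this kind of problem is per-station interval scheduling. The sketch below is not the paper's algorithm (which is iterative and more elaborate); it only shows the textbook earliest-finish-time greedy rule, with data shapes and names of our own choosing.

```python
from collections import defaultdict

def schedule_contacts(requests):
    """Baseline sketch for contact-window scheduling.

    `requests` is a list of (station, start, end) tuples with fixed
    windows. Per ground station, a maximum-cardinality set of
    non-overlapping contacts is picked by the classic greedy rule:
    sort by end time and take every window that starts after the
    last accepted one ends."""
    by_station = defaultdict(list)
    for station, start, end in requests:
        by_station[station].append((start, end))
    chosen = []
    for station, windows in by_station.items():
        windows.sort(key=lambda w: w[1])  # earliest end time first
        last_end = float("-inf")
        for start, end in windows:
            if start >= last_end:
                chosen.append((station, start, end))
                last_end = end
    return chosen
```

Real systems add the constraints the abstract alludes to (per-customer priorities, station capabilities, minimum contact durations), which is what pushes the problem into NP-hard territory and motivates the search-space reductions the paper describes.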
Multiple knapsack problem with inter-related items and its applications to real world problems
Master of Science thesis