
    Effective Solution of University Course Timetabling using Particle Swarm Optimizer based Hyper Heuristic approach

    The university course timetabling problem (UCTP) is typically a combinatorial optimization problem. Manually producing a useful timetable requires many days of effort, and the results are still unsatisfactory. Various state-of-the-art methods (heuristic and meta-heuristic) are used to solve UCTP, but these approaches typically yield instance-specific solutions. The hyper-heuristic framework addresses this complex problem more adequately. This research proposes a Particle Swarm Optimizer-based hyper-heuristic (HH PSO) to solve UCTP efficiently. PSO is used as the high-level method that selects the sequence of low-level heuristics (LLHs), which in turn generates an optimal solution. The proposed approach generates solutions in two phases (initial and improvement). A new LLH named “least possible rooms left” is developed to schedule events. Both International Timetabling Competition (ITC) datasets, ITC 2002 and ITC 2007, are used to evaluate the proposed method. Experimental results indicate that the proposed low-level heuristic helps to schedule events at the initial stage. Compared with other LLHs, the proposed LLH schedules more events on 14 of the 24 ITC 2002 instances and 15 of the 20 ITC 2007 instances. The experimental study shows that HH PSO achieves a lower soft-constraint violation rate on seven ITC 2007 instances and six ITC 2002 instances. This research concludes that the proposed LLH can obtain a feasible solution if prioritized.
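
    As a rough illustration of the “least possible rooms left” idea (a sketch under assumptions, not the authors' implementation: event and room names are invented, and timeslots are collapsed into a single occupancy set), one greedy pass could place the most room-constrained event first:

```python
from typing import Dict, List, Set

def feasible_rooms(event: str, rooms: List[str], capacity: Dict[str, int],
                   demand: Dict[str, int], occupied: Set[str]) -> List[str]:
    """Rooms that are still free and large enough for the event."""
    return [r for r in rooms
            if r not in occupied and capacity[r] >= demand[event]]

def least_possible_rooms_left(events, rooms, capacity, demand):
    """Greedy pass: always place the event with the fewest feasible rooms left."""
    schedule, occupied = {}, set()
    unplaced = list(events)
    while unplaced:
        # Most room-constrained event first.
        event = min(unplaced, key=lambda e: len(
            feasible_rooms(e, rooms, capacity, demand, occupied)))
        options = feasible_rooms(event, rooms, capacity, demand, occupied)
        unplaced.remove(event)
        if options:                 # otherwise the event stays unscheduled
            schedule[event] = options[0]
            occupied.add(options[0])
    return schedule

print(least_possible_rooms_left(
    events=["e1", "e2", "e3"], rooms=["r1", "r2"],
    capacity={"r1": 30, "r2": 100}, demand={"e1": 80, "e2": 20, "e3": 25}))
```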

    An Extended Jump Functions Benchmark for the Analysis of Randomized Search Heuristics

    Jump functions are the most-studied non-unimodal benchmark in the theory of randomized search heuristics, in particular, evolutionary algorithms (EAs). They have significantly improved our understanding of how EAs escape from local optima. However, their particular structure -- to leave the local optimum one can only jump directly to the global optimum -- raises the question of how representative such results are. For this reason, we propose an extended class $\textsc{Jump}_{k,\delta}$ of jump functions that contain a valley of low fitness of width $\delta$ starting at distance $k$ from the global optimum. We prove that several previous results extend to this more general class: for all $k \le n^{1/3}/\ln n$ and $\delta < k$, the optimal mutation rate for the $(1+1)$ EA is $\delta/n$, and the fast $(1+1)$ EA runs faster than the classical $(1+1)$ EA by a factor super-exponential in $\delta$. However, we also observe that some known results do not generalize: the randomized local search algorithm with stagnation detection, which is faster than the fast $(1+1)$ EA by a factor polynomial in $k$ on $\textsc{Jump}_k$, is slower by a factor polynomial in $n$ on some $\textsc{Jump}_{k,\delta}$ instances. Computationally, the new class allows experiments with wider fitness valleys, especially when they lie further away from the global optimum. Comment: Extended version of a paper that appeared in the proceedings of GECCO 2021. To appear in Algorithmica.
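
    A hedged sketch of such an extended jump function, consistent with the abstract's description; the exact offsets and normalisation of the paper's formal definition may differ:

```python
def jump_k_delta(x, k, delta):
    """Sketched fitness: follows the number of ones, except in a low-fitness
    valley that starts at distance k from the all-ones optimum and must be
    crossed by a delta-bit jump (delta = k recovers the classic Jump_k)."""
    n, ones = len(x), sum(x)
    if n - k < ones < n - k + delta:   # inside the valley
        return n - ones                # low fitness pushing search back out
    return ones

n, k, delta = 20, 5, 3
print([jump_k_delta([1] * m + [0] * (n - m), k, delta) for m in range(n + 1)])
```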

    Self-adjusting Population Sizes for Non-elitist Evolutionary Algorithms: Why Success Rates Matter

    Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters, such as the sizes of the parent and offspring populations or the mutation rate. It is well known that the performance of EAs may depend drastically on these parameters. Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best static parameters in EAs on discrete problems. However, the majority of these studies concerned elitist EAs, and we do not have a clear answer on whether the same mechanisms can be applied to non-elitist EAs. We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp threshold with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ. For OneMax we show that the answer crucially depends on the success rate s (i.e., a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1, λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations, the best possible runtime for any unary unbiased black-box algorithm. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax and other functions with similar characteristics.
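
    A minimal sketch of a one-(s+1)-th success rule controlling λ in a non-elitist (1, λ) EA on OneMax; the constants F = 1.5 and s = 0.5 and the clamping of λ are illustrative assumptions, not the paper's exact setup:

```python
import random

def onemax(x):
    return sum(x)

def self_adjusting_opl(n=100, s=0.5, F=1.5, max_gens=200_000):
    """Sketch of a non-elitist (1,lambda) EA with a one-(s+1)-th success rule:
    shrink lambda by F after a success, grow it by F**(1/s) after a failure."""
    x = [random.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for gen in range(1, max_gens + 1):
        # Standard bit mutation with rate 1/n for each of round(lam) offspring.
        offspring = [[bit ^ (random.random() < 1 / n) for bit in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=onemax)
        success = onemax(best) > onemax(x)
        x = best                                  # non-elitist: always replace
        lam = lam / F if success else lam * F ** (1 / s)
        lam = min(max(lam, 1.0), n)               # clamp to a sane range
        if onemax(x) == n:
            return gen
    return None

print("generations:", self_adjusting_opl())
```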

    Lazy Parameter Tuning and Control: Choosing All Parameters Randomly from a Power-Law Distribution

    Most evolutionary algorithms have multiple parameters, and their values drastically affect the performance. Due to the often complicated interplay of the parameters, setting these values right for a particular problem (parameter tuning) is a challenging task. This task becomes even more complicated when the optimal parameter values change significantly during the run of the algorithm, since then a dynamic parameter choice (parameter control) is necessary. In this work, we propose a lazy but effective solution, namely choosing all parameter values (where this makes sense) in each iteration randomly from a suitably scaled power-law distribution. To demonstrate the effectiveness of this approach, we perform runtime analyses of the $(1+(\lambda,\lambda))$ genetic algorithm with all three parameters chosen in this manner. We show that this algorithm, on the one hand, can imitate simple hill-climbers like the $(1+1)$ EA, giving the same asymptotic runtime on problems like OneMax, LeadingOnes, or Minimum Spanning Tree. On the other hand, this algorithm is also very efficient on jump functions, where the best static parameters are very different from those necessary to optimize simple problems. We prove a performance guarantee that is comparable to, and sometimes even better than, the best performance known for static parameters. We complement our theoretical results with a rigorous empirical study confirming what the asymptotic runtime results suggest. Comment: Extended version of the paper accepted to GECCO 2021, including all the proofs omitted in the conference version.
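
    A short sketch of the lazy approach: draw each parameter anew in every iteration from a power-law distribution. The exponent β = 2.5 and the coupling of the mutation rate and crossover bias to λ are assumptions for illustration (the coupling mirrors common $(1+(\lambda,\lambda))$ GA settings):

```python
import random

def power_law(u, beta=2.5):
    """Draw an integer from {1,...,u} with P(i) proportional to i**(-beta)."""
    weights = [i ** -beta for i in range(1, u + 1)]
    return random.choices(range(1, u + 1), weights=weights)[0]

# Fresh, tuning-free parameter values every iteration.
n = 100
for _ in range(3):
    lam = power_law(n)          # offspring population size
    p, c = lam / n, 1 / lam     # mutation rate and crossover bias
    print(f"lambda={lam}, p={p:.3f}, c={c:.3f}")
```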

    When hypermutations and ageing enable artificial immune systems to outperform evolutionary algorithms

    We present a time complexity analysis of the Opt-IA artificial immune system (AIS). We first highlight the power and limitations of its distinguishing operators (i.e., hypermutations with mutation potential and ageing) by analysing them in isolation. Recent work has shown that ageing combined with local mutations can help escape local optima on a dynamic optimisation benchmark function. We generalise this result by rigorously proving that, compared to evolutionary algorithms (EAs), ageing leads to impressive speed-ups on the standard $\textsc{Cliff}_d$ benchmark function both when using local and global mutations. Unless the stop at first constructive mutation (FCM) mechanism is applied, we show that hypermutations require exponential expected runtime to optimise any function with a polynomial number of optima. If instead FCM is used, the expected runtime is at most a linear factor larger than the upper bound achieved for any random local search algorithm using the artificial fitness levels method. Nevertheless, we prove that algorithms using hypermutations can be considerably faster than EAs at escaping local optima. An analysis of the complete Opt-IA reveals that it is efficient on the previously considered functions and highlights problems where the use of the full algorithm is crucial. We complete the picture by presenting a class of functions for which Opt-IA fails with overwhelming probability while standard EAs are efficient.
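
    A hedged sketch of a hypermutation operator with mutation potential and FCM; the mutation potential of n distinct flips and the strict improvement test are simplifying assumptions:

```python
import random

def hypermutation_fcm(x, fitness):
    """Sketch of a hypermutation with mutation potential n and stop at first
    constructive mutation (FCM): flip distinct bits in random order and stop
    as soon as one flip yields a fitness improvement."""
    y, base = x[:], fitness(x)
    for i in random.sample(range(len(x)), len(x)):  # up to n distinct flips
        y[i] ^= 1
        if fitness(y) > base:       # first constructive mutation found
            break
    return y

onemax = sum
print(hypermutation_fcm([0, 1, 0, 0, 1], onemax))
```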