
    Iterated local search using an add and delete hyper-heuristic for university course timetabling

    Hyper-heuristics are (meta-)heuristics that operate at a higher level to choose or generate a set of low-level (meta-)heuristics in an attempt to solve difficult optimization problems. Iterated local search (ILS) is a well-known approach for discrete optimization, combining perturbation and hill-climbing within an iterative framework. In this study, we introduce an ILS approach strengthened by a hyper-heuristic that generates heuristics from a fixed number of add and delete operations. The performance of the proposed hyper-heuristic is tested across two problem domains using real-world benchmark course timetabling instances from Tracks 2 and 3 of the Second International Timetabling Competition. The results show that mixing add and delete operations within an ILS framework yields an effective hyper-heuristic approach.
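
    As a rough illustration of the approach described above, here is a minimal Python sketch of an ILS loop whose perturbation step is produced by a hyper-heuristic that generates random sequences of add and delete operations. The names (events, cost, hill_climb, k_ops) and the toy usage are assumptions for illustration, not the authors' implementation.

        import random

        def generate_heuristic(k_ops):
            # Hyper-heuristic step: generate a low-level heuristic as a
            # fixed-length random sequence of add/delete operations.
            return [random.choice(["add", "delete"]) for _ in range(k_ops)]

        def apply_heuristic(solution, ops, events):
            # Apply the generated sequence to a copy of the solution.
            candidate = set(solution)
            for op in ops:
                if op == "add":
                    candidate.add(random.choice(events))
                elif candidate:  # "delete": drop a random scheduled event
                    candidate.discard(random.choice(sorted(candidate)))
            return candidate

        def iterated_local_search(initial, events, cost, hill_climb, k_ops=3, iters=200):
            best = hill_climb(set(initial))
            for _ in range(iters):
                ops = generate_heuristic(k_ops)       # generate a perturbation heuristic
                candidate = hill_climb(apply_heuristic(best, ops, events))
                if cost(candidate) <= cost(best):     # accept equal-or-better solutions
                    best = candidate
            return best

        # Toy usage: "events" are integers, cost penalizes deviation from size 10,
        # and a do-nothing hill-climber stands in for a real local search.
        events = list(range(20))
        print(iterated_local_search([0, 1, 2], events,
                                    cost=lambda s: abs(len(s) - 10),
                                    hill_climb=lambda s: s))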

    Shared Risk Link Group (SRLG)-Diverse Path Provisioning Under Hybrid Service Level Agreements in Wavelength-Routed Optical Mesh Networks

    The static provisioning problem in wavelength-routed optical networks has been studied for many years. However, service providers still face challenges arising from the special requirements of provisioning services at the optical layer. In this paper, we incorporate some realistic constraints into the static provisioning problem and formulate it under different network resource availability conditions. We consider three classes of shared risk link group (SRLG)-diverse path protection schemes: dedicated, shared, and unprotected. We associate with each connection request a lightpath length constraint and a revenue value. When the network resources are not sufficient to accommodate all connection requests, the static provisioning problem is formulated as a revenue maximization problem, whose objective is to maximize the total revenue value. When the network has sufficient resources, the problem becomes a capacity minimization problem with the objective of minimizing the number of used wavelength-links. We provide integer linear programming (ILP) formulations for these problems. Because solving these ILP problems is extremely time consuming, we propose a tabu search heuristic to solve them within a reasonable amount of time. We also develop a rerouting optimization heuristic based on previous work. Experimental results compare the solutions obtained by the tabu search heuristic and the rerouting optimization heuristic. For both problems, the tabu search heuristic outperforms the rerouting optimization heuristic.
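
    For context, the skeleton below shows a generic tabu search loop of the kind the abstract applies: a short-term memory forbids recently used moves, and an aspiration criterion overrides the ban when a move improves on the best solution found so far. The neighborhood, scoring function, and toy usage are illustrative assumptions standing in for the paper's provisioning-specific moves over SRLG-diverse paths.

        from collections import deque

        def tabu_search(initial, neighbors, score, tabu_len=10, iters=100):
            # Maximize score(solution); neighbors(s) yields (move_key, candidate) pairs.
            current = best = initial
            tabu = deque(maxlen=tabu_len)  # short-term memory of recent move keys
            for _ in range(iters):
                candidates = [(k, c) for k, c in neighbors(current)
                              if k not in tabu or score(c) > score(best)]  # aspiration
                if not candidates:
                    break
                key, current = max(candidates, key=lambda kc: score(kc[1]))
                tabu.append(key)
                if score(current) > score(best):
                    best = current
            return best

        # Toy usage: walk the integers toward 7; the tabu list discourages
        # immediately undoing the previous move.
        neighbors = lambda x: [(("to", x - 1), x - 1), (("to", x + 1), x + 1)]
        print(tabu_search(0, neighbors, score=lambda x: -abs(x - 7)))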

    Knowledge management overview of feature selection problem in high-dimensional financial data: Cooperative co-evolution and MapReduce perspectives

    The term big data characterizes the massive amounts of data generated by advanced technologies in different domains, using the 4Vs (volume, velocity, variety, and veracity) to indicate the amount of data that can only be processed via computationally intensive analysis, the speed of its creation, the different types of data, and its accuracy. High-dimensional financial data, such as time-series and space-time data, contain a large number of features (variables) while having a small number of samples, and are used to measure various real-time business situations for financial organizations. Such datasets are normally noisy, complex correlations may exist between their features, and many domains, including finance, lack the analytical tools to mine the data for knowledge discovery because of the high dimensionality. Feature selection is an optimization problem: find a minimal subset of relevant features that maximizes classification accuracy and reduces computation. Traditional statistics-based feature selection approaches are not adequate to deal with the curse of dimensionality associated with big data. Cooperative co-evolution, a meta-heuristic algorithm and a divide-and-conquer approach, decomposes high-dimensional problems into smaller sub-problems. Further, MapReduce, a programming model, offers a ready-to-use distributed, scalable, and fault-tolerant infrastructure for parallelizing the developed algorithm. This article presents a knowledge management overview of evolutionary feature selection approaches, state-of-the-art cooperative co-evolution and MapReduce-based feature selection techniques, and future research directions.
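
    The sketch below illustrates the cooperative co-evolution decomposition described above: the feature indices are partitioned into sub-problems, each evolving its own inclusion bitmask, and every mutation is judged on the joint mask assembled from the other sub-problems' current representatives. The fitness stub stands in for a real classifier's accuracy, and all names are hypothetical; in a MapReduce framing, the independent fitness evaluations of the sub-problems could be distributed as map tasks.

        import random

        def decompose(n_features, n_groups):
            # Divide-and-conquer step: randomly partition the feature indices.
            idx = list(range(n_features))
            random.shuffle(idx)
            return [idx[g::n_groups] for g in range(n_groups)]

        def coevolve(n_features, fitness, n_groups=4, gens=50):
            groups = decompose(n_features, n_groups)
            # One random bitmask per sub-problem as its current representative.
            best = [{i: random.random() < 0.5 for i in g} for g in groups]

            def joint_mask(g_id, mask):
                # Evaluate one sub-problem's mask together with the other
                # sub-problems' representatives.
                merged = {}
                for j, rep in enumerate(best):
                    merged.update(mask if j == g_id else rep)
                return [merged[i] for i in range(n_features)]

            for _ in range(gens):
                for g_id, group in enumerate(groups):
                    trial = dict(best[g_id])
                    flip = random.choice(group)
                    trial[flip] = not trial[flip]  # mutate one bit of this sub-problem
                    if fitness(joint_mask(g_id, trial)) >= fitness(joint_mask(g_id, best[g_id])):
                        best[g_id] = trial         # keep non-worsening mutations
            merged = {}
            for rep in best:
                merged.update(rep)
            return sorted(i for i, keep in merged.items() if keep)

        # Toy usage: a stand-in "accuracy" that rewards selecting even-indexed features.
        print(coevolve(12, fitness=lambda mask: sum(keep == (i % 2 == 0)
                                                    for i, keep in enumerate(mask))))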