
    Using solution properties within an enumerative search to solve a sports league scheduling problem

    This paper presents an enumerative approach for a particular sports league scheduling problem known as “Prob026” in CSPLib. Despite its exponential-time complexity, this simple method can solve all instances involving a number T of teams up to 50 in a reasonable amount of time, while the best known tabu search and constraint programming algorithms are limited to T ⩽ 40 and the available direct construction methods only solve instances where ( T - 1 ) mod 3 ≠ 0 or T / 2 is odd. Furthermore, solutions were also found for some T values up to 70. The proposed approach relies on discovering, by observation, interesting properties of solutions to small problem instances and then using these properties in the final algorithm to constrain the search process.
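
    The direct construction methods mentioned above build on classical round-robin constructions. As an illustrative sketch (not the paper's algorithm, and omitting prob026's additional period constraints), the textbook circle method constructs a single round robin in which every team plays exactly once per round and meets every other team exactly once:

    ```python
    def circle_schedule(T):
        """Circle-method round robin for an even number T of teams.
        Returns T-1 rounds, each a list of T/2 (i, j) pairings."""
        teams = list(range(T))
        rounds = []
        for _ in range(T - 1):
            # pair first with last, second with second-to-last, etc.
            pairs = [(min(teams[k], teams[T - 1 - k]), max(teams[k], teams[T - 1 - k]))
                     for k in range(T // 2)]
            rounds.append(pairs)
            # fix teams[0]; rotate the remaining teams one position
            teams = [teams[0]] + [teams[-1]] + teams[1:-1]
        return rounds
    ```

    For T = 8 this yields 7 rounds covering all 28 distinct pairings, with every team playing once per round; the hard part of prob026 is layering period constraints on top of such a schedule.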

    A note on a sports league scheduling problem

    Sports league scheduling is a difficult task in the general case. In this short note, we report two improvements to an existing enumerative search algorithm for an NP-hard sports league scheduling problem known as "prob026" in CSPLib. These improvements are based on additional rules that constrain and accelerate the enumeration process. The proposed approach is able to find a solution (schedule) for all prob026 instances with a number T of teams ranging from 12 to 70, including several T values for which a solution is reported for the first time.

    Solving Challenging Real-World Scheduling Problems

    This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm, where customer satisfaction refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development; its first version was introduced in 1998, and this thesis is the result of a five-year development of the algorithm. One of its most valuable characteristics has proven to be the ability to solve a wide range of scheduling problems, and it is likely that it can be tuned to also tackle a range of other combinatorial problems. The algorithm uses features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.

    Global Optimization of the Maximum K-Cut Problem

    In graph theory, the maximum k-cut (max-k-cut) problem is a representative of the class of NP-hard combinatorial optimization problems. It arises in many industrial applications, and its objective is to partition the vertices of a given graph into at most k parts such that the total weight of the cut edges is maximized. The methods proposed in the literature to solve the max-k-cut optimally usually employ the associated semidefinite programming (SDP) relaxation in a branch-and-bound framework. In comparison with the linear programming (LP) relaxation, the SDP relaxation is stronger but suffers from high CPU times; therefore, methods based on SDP cannot solve large problems. This thesis introduces an efficient branch-and-bound method to solve the max-k-cut problem using tightened SDP and LP relaxations, and presents three approaches to improve the solutions of the problem. The first approach focuses on identifying relevant classes of inequalities to tighten the relaxations of the max-k-cut. It carries out an experimental study of four classes of inequalities from the literature: clique, general clique, wheel and bicycle wheel. In order to include these inequalities, we employ a cutting plane algorithm (CPA) to add only the most important inequalities in practice, and we design several separation routines to find violations in a relaxed solution. Computational results suggest that the wheel inequalities are the strongest by far; moreover, their inclusion improves the bound of the SDP formulation by more than 2%. The second approach introduces SDP-based constraints to strengthen the LP relaxation. Moreover, the CPA is improved by exploiting the early-termination technique of an interior-point method. Computational results show that the LP relaxation with the SDP-based inequalities outperforms the SDP relaxation for many instances, especially for a large number of partitions (k ≥ 7). The third approach investigates the branch-and-bound method using both previous approaches. Four components of the branch-and-bound are considered. First, four heuristic methods are presented to find a feasible solution: the iterative clustering heuristic, the multiple operator heuristic, variable neighborhood search, and the greedy randomized adaptive search procedure. The second component analyzes dichotomic and polytomic strategies to split a subproblem. The third component studies five branching rules. Finally, for node selection, we consider the following strategies: best-first search, depth-first search, and breadth-first search. For each component, we provide computational tests for different values of k. Computational results show that the proposed exact method is able to uncover many solutions. Each of these three approaches contributed to the design of an efficient method to solve the max-k-cut problem. Moreover, the proposed approaches can be extended to solve generic mixed-integer SDP problems.
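
    As a rough illustration of the kind of move-based heuristics listed above (in the spirit of a multiple operator heuristic, though not the thesis's implementation), a generic single-vertex-move local search for max-k-cut can be sketched as follows:

    ```python
    import random

    def local_search_max_k_cut(n, edges, k, seed=0):
        """Greedy single-vertex-move local search for max-k-cut.
        edges: dict mapping vertex pairs (i, j) to positive weights."""
        rng = random.Random(seed)
        part = [rng.randrange(k) for _ in range(n)]       # random initial partition
        adj = [[] for _ in range(n)]
        for (i, j), w in edges.items():
            adj[i].append((j, w))
            adj[j].append((i, w))
        improved = True
        while improved:                                   # repeat until locally optimal
            improved = False
            for v in range(n):
                # weight of edges at v currently in the cut
                cur = sum(w for u, w in adj[v] if part[u] != part[v])
                for c in range(k):
                    if c == part[v]:
                        continue
                    new = sum(w for u, w in adj[v] if part[u] != c)
                    if new > cur:                         # strictly improving move
                        part[v] = c
                        cur = new
                        improved = True
        cut = sum(w for (i, j), w in edges.items() if part[i] != part[j])
        return part, cut
    ```

    Each accepted move strictly increases the global cut weight (only edges incident to the moved vertex change), so the search terminates at a local optimum; exact methods such as the branch-and-bound above use such heuristics only to supply feasible incumbent solutions.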

    Is illness important in professional soccer? An evaluation of incidence, risk factors and intervention.

    Background - Previous research has demonstrated that illness is not a major problem within professional soccer. However, this research did not record illness where performance is restricted or medical attention is given, instead focusing only on illness where time is lost from soccer activities. Therefore, the aim of the present thesis was to establish the importance of illness in professional soccer by evaluating illness incidence, proposed risk factors and an illness prevention intervention. Methods - Illness incidence was recorded from 1 professional soccer team (59 different players) across 3 seasons (2016-17 to 2018-19), using a system that recorded all illness definitions and a questionnaire to quantify performance-restriction illness. Illnesses were confirmed via physician diagnosis. During the congested fixture period of the 2017-18 season, illness incidence was compared to a recreationally active comparator population from a university institution. Physical load data (via microelectromechanical systems and heart rate monitoring) and subjective wellbeing data (via a 1-5 Likert scale assessing fatigue, sleep quality, general muscle soreness, stress, mood and sleep hours) were also collected across this time period. 7- and 28-day average values for physical load and subjective wellbeing variables prior to illness events were compared to averages (indicative of normality) across the same time periods, using a paired samples t-test. In the 2018-19 season an illness prevention intervention was developed and implemented across 4 months (November - February). Illness incidence in this season was compared to the 2 previous seasons using a repeated measures analysis of variance (RM-ANOVA). Outcome measures for intervention evaluation assessed the reasons behind intervention effectiveness.
    Results - Using 2 seasons' worth of data, chapter 3 demonstrated that illness incidence was greater than training injury incidence (91 vs 17 incidences) and greater than values reported in previous research (91 vs 46 incidences). Illness incidence was also greater in the soccer team compared to the recreationally active comparator group (15 vs 10 incidences). Temporal patterns showed that peaks in illness incidence were distributed throughout the 2 seasons, not just in the winter months that coincide with congested fixture scheduling (10 incidences in July, 8 in September, 6 in October, 7 in November and 10 in January). Chapter 4 showed that, prior to illness events, there was an increase in 7-day average values for training impulse per minute (0.4±0.4 vs 0.6±0.5, p<0.01) and time spent above 85% of maximum heart rate (2.3±1.8 vs 2.8±2.2, p=0.02) (markers of internal physical load), whilst maximum velocity was reduced (4.1±0.3 vs 3.7±1.0, p=0.03) (external load), compared to normality. In the 28 days preceding illness events there also appeared to be a reduction in sleep quality (3.8±0.3 vs 3.7±0.4, p=0.01) compared to normality. Chapter 5 indicates that the intervention did not reduce illness incidence in comparison to previous seasons. A RM-ANOVA determined that there were significant differences in 1 illness incidence variable between seasons (F (2, 11) = 17.581, p = 0.001). Post hoc comparisons showed an increased total illness incidence per 1000 hours in the 2017-18 season (20.2 ± 9.2) compared to the 2016-17 (7.1 ± 9.4, p = 0.004) and 2018-19 seasons (9.2 ± 7.5, p = 0.015). There were no other significant differences between seasons. Evaluation revealed that the intervention appeared to be successful in improving awareness of illness prevention, but did not alter aspects of behaviour. Conclusions - Illness does appear to be a problem within professional soccer.
    This has implications for training and match availability, performance, team success and therefore club finances. Findings suggest that illness is related to physical load and other risk factors within this population. Further exploration of these factors within this environment is required. Changes in the identified markers of physical load and subjective wellbeing may identify players who are at risk of illness and allow intervention where appropriate. The illness prevention intervention did not reduce illness in comparison to previous seasons. The limited impact may have been due to increased competition demands during the 2018-19 season, elevated illness reporting due to the intervention itself and a lack of focus on influencing behaviour. Illness surveillance and prevention should be a future focus within professional soccer.
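
    The paired comparison described in the Methods (pre-illness averages versus baseline averages) can be illustrated with a minimal stdlib sketch on hypothetical numbers, not the thesis's dataset:

    ```python
    import math

    def paired_t_statistic(baseline, pre_illness):
        """t statistic for a paired samples t-test (d = pre_illness - baseline)."""
        assert len(baseline) == len(pre_illness)
        n = len(baseline)
        d = [a - b for a, b in zip(pre_illness, baseline)]
        mean_d = sum(d) / n
        var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance of differences
        return mean_d / math.sqrt(var_d / n)

    # hypothetical 7-day training-impulse averages for four players
    baseline = [0.4, 0.5, 0.3, 0.4]
    pre_illness = [0.6, 0.7, 0.5, 0.6]
    t = paired_t_statistic(baseline, pre_illness)
    ```

    The resulting t value would then be compared against the t distribution with n - 1 degrees of freedom to obtain the p-values reported in the Results.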

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises from the relaxation of more general mixed integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
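
    For context, a standard form of the single node fixed charge network flow set with constant capacity C reads as follows (the exact variant studied in this work may differ, e.g. in the direction of the flow-balance inequality):

    ```latex
    X = \left\{ (x, y) \in \mathbb{R}_+^{n} \times \{0,1\}^{n} \;:\;
        \sum_{j=1}^{n} x_j \le b, \quad
        x_j \le C\, y_j, \; j = 1, \dots, n \right\}
    ```

    Here x_j is the flow on arc j, y_j its fixed-charge indicator, and b the node capacity; polyhedral studies of such sets characterize the convex hull conv(X), typically via families of flow cover-type inequalities.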

    A theory and its model to formulate business unit strategies within the knowledge economy context: nine textile-catalonian cases

    This thesis is understood within the context of the knowledge economy. In this sense the study reflects several matters that contribute to the firms' competitive advantage; these are: 1) The mission and vision of the interviewed companies. Even though most of the people interviewed understand these concepts, they are not formally stated. 2) Both strategies, operations and innovation, are also well understood by the people interviewed; yet, as with the previous point, they are not formally stated but are very well executed. 3) The people interviewed are fully able to identify their firm's competitive advantage and the main intangible assets that make it sustainable. 4) Because of the above, these persons are aware that their most valuable asset is their employees; without the knowledge of their collaborators the company is adrift. 5) The previous points emphasize a fundamental idea: trust. All the people interviewed acknowledge that an important part of their firms' success is social capital. This asset is found inside the firm, outside it, or on both sides. In other words, man by his own nature is a social being who cannot live and develop his qualities by himself (Concilio II, 1965); thus, once trust has been experienced, its benefits materialize. 6) Because of the previous points, the people interviewed concluded that if they were to use the proposed theory and its model to formulate strategies, their overall strategy formulation process would be qualitatively enhanced. 7) Despite point 6, the people interviewed, aware of their company's size, consider that both the theory and its model are too big for them, but the constructs that build them appear sensible.