
    The U.S. National Football League Scheduling Problem

    We describe the problem of scheduling the television broadcasts of the U.S. National Football League (NFL). Unlike traditional round-robin tournament scheduling, the NFL problem involves assigning games to broadcast slots under various complex constraints while attempting to satisfy a set of user preferences. In addition, mixed-initiative functionality was required so that the user could control and assist in the scheduling process. A prototype system was developed for the NFL that produced schedules satisfying many of these constraints and preferences. In this paper, we provide an overview of the constraint-solving methodology employed and the implementation of the NFL prototype system.
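    The slot-assignment search the abstract describes can be sketched as a tiny backtracking constraint solver. This is an illustration only: the games, slots, and constraints below are invented, and the paper's actual system is far richer (preferences, mixed initiative, and many more constraint types).

```python
# Minimal sketch: assign games to broadcast slots by backtracking search,
# checking a list of constraint functions against the partial assignment.
# All game/slot names and constraints here are invented examples.

def schedule(games, slots, constraints):
    """Return a dict game -> slot satisfying every constraint, or None."""
    assignment = {}

    def backtrack(i):
        if i == len(games):
            return dict(assignment)
        for slot in slots:
            assignment[games[i]] = slot
            if all(c(assignment) for c in constraints):
                result = backtrack(i + 1)
                if result is not None:
                    return result
            del assignment[games[i]]
        return None

    return backtrack(0)

# Toy instance: one game per slot, and the marquee game must air in prime time.
games = ["NE@NYJ", "DAL@PHI", "GB@CHI"]
slots = ["Sun 1pm", "Sun 4pm", "Sun 8pm"]
constraints = [
    lambda a: len(set(a.values())) == len(a),           # no shared slots
    lambda a: a.get("DAL@PHI", "Sun 8pm") == "Sun 8pm"  # a preference made hard
]
print(schedule(games, slots, constraints))
```

    A real scheduler would treat preferences as soft constraints to be optimised rather than hard requirements, which is part of what makes the NFL problem difficult.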

    Assessing hyper-heuristic performance

    Limited attention has been paid to assessing the generality of hyper-heuristics. Their performance has predominantly been assessed in terms of optimality, which is not ideal: the aim of hyper-heuristics is not to compete with state-of-the-art approaches but to raise the level of generality, i.e. the ability of a technique to produce good results across different problem instances or problems rather than the best results for some instances and poor results for others. Furthermore, the existing literature makes it evident that different hyper-heuristics aim to achieve different levels of generality and need to be assessed as such. To cater for this, the paper first presents a new taxonomy of four levels of generality that a hyper-heuristic can attain, based on a survey of the literature. The paper then proposes a performance measure to assess different types of hyper-heuristics at the four levels of generality in terms of generality rather than optimality. Three case studies from the literature demonstrate the application of the generality performance measure. The paper concludes by examining how the generality measure can be combined with measures of other performance criteria, such as optimality, to assess hyper-heuristic performance on more than one criterion.
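    The distinction between optimality and generality can be made concrete with a toy scoring function. This is not the paper's measure, just one plausible sketch: it rewards consistently good normalised results across instances instead of the single best result.

```python
# Hedged sketch of a generality-oriented score (invented, not the paper's
# measure): mean normalised gap to best-known results, plus a spread
# penalty so uneven performance across instances scores worse.

def generality_score(results, best_known):
    """results/best_known: dict instance -> objective (minimisation).
    Lower score = more consistently good across instances."""
    gaps = [(results[i] - best_known[i]) / best_known[i] for i in best_known]
    mean = sum(gaps) / len(gaps)
    spread = max(gaps) - min(gaps)  # penalise uneven performance
    return mean + spread

best = {"inst1": 100, "inst2": 200}
specialist = {"inst1": 100, "inst2": 300}  # best on one instance, poor on the other
generalist = {"inst1": 110, "inst2": 220}  # good on both
assert generality_score(generalist, best) < generality_score(specialist, best)
```

    Under a purely optimality-based comparison the specialist looks attractive on inst1; a generality-oriented measure ranks the generalist higher, which is the shift in emphasis the abstract argues for.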

    Symmetry in constraint programming

    Constraint programming is an invaluable tool for solving many of the complex NP-complete problems we need solutions to. These problems can be easily described as Constraint Satisfaction Problems (CSPs) and then passed to constraint solvers: complex pieces of software written to solve general CSPs efficiently. Many of these are real-world problems: planning (e.g. vehicle routing), scheduling (e.g. job-shop schedules) and timetabling (e.g. staff rotas), to name but a few. In the real world, we place structure on objects to make them easier to deal with, and this structure manifests itself as symmetry. The symmetry in these real-world problems makes them easier for humans to deal with, but it leads to a great deal of redundancy when using computational methods of problem solving. This thesis therefore examines some of the many ways of exploiting the symmetry of CSPs to reduce the amount of computation needed by constraint solvers. We consider the ease of use of previous symmetry-breaking methods and introduce a new and novel method of describing the symmetries of CSPs. We examine previous methods of symmetry breaking and show how their computation can be drastically reduced while still breaking all symmetry. We give the first detailed investigation into the behaviour of breaking only subsets of all symmetry, looking at how this affects the performance of constraint solvers before characterising the properties of a good symmetry, and we then present an original method for choosing the best symmetries to use. Finally, we examine areas of redundant computation in constraint solvers that no other research has addressed, and propose new ways of dealing with this redundancy, with results from an example implementation that improves efficiency by several orders of magnitude.
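    The redundancy symmetry causes, and the standard lex-leader idea for breaking it, can be shown in miniature. This sketch is not the thesis's novel method: it simply keeps a solution only if it is lexicographically no larger than each of its images under the problem's symmetries, so one canonical representative survives per symmetry class.

```python
# Illustrative lex-leader symmetry breaking on a toy CSP (invented example).
from itertools import product

def lex_leader_solutions(solutions, symmetries):
    """Keep only canonical solutions: those no larger than any symmetric image."""
    return [s for s in solutions
            if all(s <= g(s) for g in symmetries)]

# Toy CSP: pairs (x, y) over {0,1,2} with x != y; swapping x and y maps
# solutions to solutions, so each unordered pair is counted twice.
solutions = [s for s in product(range(3), repeat=2) if s[0] != s[1]]
swap = lambda s: (s[1], s[0])
canonical = lex_leader_solutions(solutions, [swap])
print(canonical)  # only the x < y representatives survive
```

    In a real solver the lex-leader condition is posted as constraints before search rather than used as a post-hoc filter, which is what prunes the redundant computation.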

    Task planning with uncertainty for robotic systems

    In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur, and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables, which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and to automatically select a retry or an alternative error-recovery sequence when errors occur. Task sequencing and task execution with error-recovery capability for one and multiple soft components in robotic systems are investigated.
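    The core mechanism of a fuzzy Petri net can be sketched in a few lines: place markings are fuzzy degrees in [0, 1] rather than integer token counts, and a firing transition propagates the minimum of its input degrees (a fuzzy AND). The places, transition, and threshold below are invented for illustration and do not reproduce the dissertation's full reasoning structures.

```python
# Illustrative fuzzy Petri-net step (invented example, not the dissertation's
# formalism in full): markings are fuzzy truth degrees, and a transition is
# enabled only when every input degree exceeds a threshold.

def fire(marking, transition, threshold=0.5):
    """Fire transition = (input_places, output_place) if enabled.
    The output receives the minimum of the input degrees (fuzzy AND)."""
    inputs, output = transition
    degrees = [marking[p] for p in inputs]
    if min(degrees) <= threshold:
        return marking  # not enabled; marking unchanged
    new = dict(marking)
    for p in inputs:
        new[p] = 0.0    # consume input "tokens"
    new[output] = min(degrees)
    return new

# Toy assembly step: inserting a peg requires grasping and aligning it first.
marking = {"grasped": 0.9, "aligned": 0.7, "inserted": 0.0}
insert_peg = (["grasped", "aligned"], "inserted")
after = fire(marking, insert_peg)
print(after)  # the degree of success propagates as min(0.9, 0.7)
```

    Error recovery in this style compares such propagated degrees against sensing results, retrying or switching to an alternative sequence when the verified degree is too low.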

    Long-term robot mapping in dynamic environments

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 139-144). One of the central goals in mobile robotics is to develop a mobile robot that can construct a map of an initially unknown dynamic environment. This is often referred to as the Simultaneous Localization and Mapping (SLAM) problem. A number of approaches to the SLAM problem have been successfully developed and applied, particularly to a mobile robot constructing a map of a 2D static indoor environment. While these methods work well for static environments, they are not robust to dynamic environments, which are complex and composed of numerous objects that move at widely varying time-scales, such as people or office furniture. The problem of maintaining a map of a dynamic environment is important both for real-world applications and for the advancement of robotics. A mobile robot executing extended missions, such as autonomously collecting data underwater for months or years, must be able to reliably know where it is, update its map as the environment changes, and recover from mistakes. From a fundamental perspective, this work is important in order to understand and determine the problems that occur with existing mapping techniques during persistent long-term operation. The primary contribution of the thesis is Dynamic Pose Graph SLAM (DPG-SLAM), a novel algorithm that addresses two core challenges of the long-term mapping problem. The first challenge is to ensure that the robot is able to remain localized in a changing environment over great lengths of time. The second is to maintain an up-to-date map over time in a computationally efficient manner. DPG-SLAM directly addresses both of these issues to enable long-term mobile robot navigation and map maintenance in changing environments. Using Kaess and Dellaert's incremental Smoothing and Mapping (iSAM) as the underlying SLAM state estimation engine, the dynamic pose graph evolves over time as the robot explores new areas and revisits previously mapped areas. The algorithm is demonstrated on two real-world dynamic indoor laser data sets, showing the ability to maintain an efficient, up-to-date map despite long-term environmental changes. Future research issues, such as the integration of adaptive exploration with dynamic map maintenance, are identified. By Aisha Naima Walcott, Ph.D.
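    The pose-graph bookkeeping underlying this family of methods can be sketched minimally: nodes are robot poses, edges are relative-motion or loop-closure constraints, and keeping the map current means pruning poses whose observations no longer match the environment. None of the iSAM estimation machinery is reproduced here; the classes, pose values, and pruning criterion below are invented for illustration.

```python
# Conceptual pose-graph sketch (invented example; DPG-SLAM itself builds on
# iSAM for state estimation, which is not reproduced here).

class PoseGraph:
    def __init__(self):
        self.poses = {}   # node id -> (x, y, heading)
        self.edges = []   # (from_id, to_id, relative_measurement)

    def add_pose(self, pid, pose, from_id=None, measurement=None):
        """Add a pose node, optionally linked to a previous pose by an edge."""
        self.poses[pid] = pose
        if from_id is not None:
            self.edges.append((from_id, pid, measurement))

    def prune(self, stale_ids):
        """Drop poses whose observations no longer match the environment."""
        self.poses = {k: v for k, v in self.poses.items() if k not in stale_ids}
        self.edges = [(a, b, m) for a, b, m in self.edges
                      if a not in stale_ids and b not in stale_ids]

g = PoseGraph()
g.add_pose(0, (0.0, 0.0, 0.0))
g.add_pose(1, (1.0, 0.0, 0.0), from_id=0, measurement=(1.0, 0.0, 0.0))
g.add_pose(2, (2.0, 0.0, 0.0), from_id=1, measurement=(1.0, 0.0, 0.0))
g.prune({1})  # e.g. pose 1 observed furniture that has since moved
print(len(g.poses), len(g.edges))
```

    A real system must also reconnect the graph after pruning so the remaining poses stay constrained, which is part of what makes long-term map maintenance computationally challenging.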

    Supporting software processes for distributed software engineering teams

    Software processes relate to the sequence of steps that must be carried out by humans to pursue the goals of software engineering. In order to have an accurate representation of what these steps actually are, software processes can be modelled using a process modelling language (PML). Some PMLs simply support the specification of the steps, while others enable the process to be executed (or enacted). When enacted, software processes can provide guidance, automation and enforcement of the software engineering practices embodied in the model. Although there has been much fruitful research into PMLs, their adoption by industry has not been widespread. While the reasons for this lack of success may be many and varied, this thesis identified two areas in which PMLs may have been deficient: human-dimension issues, in terms of support for awareness and visualisation; and support for addressing management and resource issues that might arise dynamically when a process model is being enacted. In order to address some of these issues, a new visual PML called Virtual Reality Process Modelling Language (VRPML) has been developed and evaluated. Novel features have been introduced in VRPML to support the integration of a virtual environment and the dynamic creation and assignment of tasks and resources at the PML enactment level. VRPML serves as a research vehicle for addressing our main research hypothesis: that a PML which exploits a virtual environment is useful for supporting software processes for distributed software engineering teams. EThOS - Electronic Theses Online Service. Universiti Sains Malaysia. United Kingdom.

    Egalitarian Liberalism And Economic Freedom

    This dissertation considers three major challenges to egalitarian liberal institutions made by classical liberals: that egalitarian liberal institutions involve too much coercive interference with individual economic decisions, that free markets tend to do better at rewarding people on the basis of their economic choices, and that only by recognizing full liberal rights of ownership can a society best promote a stable property regime consistent with our pre-political conventions of ownership. Each of these objections fails, but they point to an underlying concern that egalitarian liberal institutions fail to adequately protect economic freedom. The dissertation then develops and defends a conception of economic freedom that is reflected in egalitarian liberal institutions. Economic freedom depends on the quality and availability of the options individuals have in markets, especially the quality of exit options available to avoid coercive or exploitative conditions of employment or exchange. It then extends this view to the idea of property-owning democracy and worker ownership of firms.

    Expert Systems as an auxiliary tool for teaching the Bases da Técnica Cirúrgica (Fundamentals of Surgical Technique) course

    This work presents the contribution that expert systems can offer to medical students, in particular those of the Fundamentals of Surgical Technique course, by providing a tool that allows the simulation of a wide range of clinical situations, aids in the decision-making process (diagnosis and therapeutics), and tests knowledge interactively: the computer formulates questions and the student answers them, verifying whether the goal state (the solution of a problem) corresponds to what the student envisaged. Since decision flowcharts are among the most widely used forms of representing medical knowledge, a piece of software was developed, the Rule Generating System (Sistema Gerador de Regras, SGR), which, starting from these structures drawn as weighted graphs, supplies in report form all the elements needed to build expert systems (variables and production rules). To broaden the contribution of knowledge engineering to the task of building expert systems, a knowledge base was also produced that enables the diagnosis of hydro-electrolytic disturbances (a topic covered in the course) and that employs forms of knowledge representation other than decision flowcharts. Using the SGR, expert systems can be assembled from previously elaborated decision flowcharts by any user familiar with computers and graphical interfaces (Windows), in most cases dispensing with the knowledge engineer. The expert systems were built with the Expert Sinta shell, developed by the Artificial Intelligence Laboratory (LIA) of the Universidade Federal do Ceará.
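    The SGR's central idea, turning a decision flowchart into production rules, amounts to emitting one IF-THEN rule per root-to-leaf path of the graph. The sketch below illustrates that transformation only; the flowchart content is invented, and the real SGR produces output for the Expert Sinta shell rather than plain strings.

```python
# Hedged sketch: derive production rules from a decision flowchart, one rule
# per root-to-leaf path. The clinical content below is an invented toy example.

def rules_from_flowchart(tree, conditions=()):
    """tree: (question, {answer: subtree-or-diagnosis}). Yields IF-THEN rules."""
    if isinstance(tree, str):  # leaf = diagnosis
        yield "IF " + " AND ".join(conditions) + " THEN " + tree
        return
    question, branches = tree
    for answer, subtree in branches.items():
        yield from rules_from_flowchart(
            subtree, conditions + (f"{question} = {answer}",))

flowchart = ("serum_sodium", {
    "low":  ("urine_osmolality", {"high": "SIADH suspected",
                                  "low":  "water intoxication"}),
    "high": "hypernatremia",
})
for rule in rules_from_flowchart(flowchart):
    print(rule)
```

    Because every path becomes a self-contained rule, a lecturer can build the knowledge base directly from the flowchart drawing, which is what lets the SGR dispense with a knowledge engineer in most cases.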

    Pacifying Leviathan: Back to basics in peace-building out of conflict

    This thesis focuses critically on contemporary theory and practice of peace-building where there has been conflict. The frequency with which violence has resumed after peace processes in many recent examples suggests that both theory and practice have not worked as intended. The thesis explores insights that might improve the odds that governing institutions (or, more particularly, the people who work in them) can put aside violence. In the terms used in this thesis: how might Leviathan be pacified? The thesis therefore deals with basics evident in all recorded (and probably pre-historic) human experience. For the modern states of Western Europe and North America, pacifying Leviathan followed centuries of conflict (including two world wars), interspersed with governance reforms and constitutional adjustments. The process is ongoing, but by the middle of the 20th century "the liberal state" had clearly emerged, with features that included constitutions, the rule of law, the protection of human rights and the market system. There appeared to be a widespread view after World War II that the essence of the liberal state apparatus could be written down in documents, transplanted into many different historical and cultural contexts, and would work much as the model predicted, i.e. was easily reproducible, perhaps infinitely, even in smaller and smaller versions. From 1945 to 2010, the number of states at the United Nations almost quadrupled (51 to 192). Member 193 (South Sudan) may emerge from decades of conflict in 2011. In all that state formation, the optimistic view was that the new documents and institutions would provide structures within which political and/or ethnic competitors/combatants would engage in non-violent political competition. In this thesis, "reverse-engineering" is the term given to this notion. Such optimism was severely dented by the experiences of many newly independent states in the mid-to-late 20th century. As violence escalated in new and existing states all over the world after the Cold War ended (taken, for convenience, as 1990), reverse-engineering remained at the core of the formula for peace-building after conflict. As in the post-colonial period, liberal peace-building since 1990 has also repeatedly failed to work as intended, including through the resumption of conflict. The most fragile states have posed the hardest problems, not only for their suffering citizens but for the international community seeking how best to help. With this in mind, and accepting that each state and society is unique, this thesis sets out building blocks for alternative approaches. It does not suggest there are simple answers to pacifying Leviathan, either generally or in relation to any particular example. If it is indeed possible in any place (e.g. Haiti) to reduce ongoing conflict, the argument is that these blocks should be amongst the foundations of theory to inform practice. The core thesis is thus that the chances of pacifying Leviathan might be significantly improved if domestic and international actors:
    • Adopt a conflict transformation approach to guide theory and practice;
    • Come to terms with groupism: how and why humans bond into groups and the potential this poses for violence and peace;
    • Understand the importance of receptivity: the notion that critical masses of key actors should squarely face (often when they have become exhausted by) the consequences of violent competitiveness and seek alternatives;
    • Translate receptivity into learned constitutionalism: learning to govern by rules amongst sufficient actors; and
    • Develop international assistance guided by the above perspectives, which, with the consent of the peoples concerned, finds ways to stay appropriately engaged for the time needed to strengthen the factors that should pacify Leviathan.
    The thesis does not focus on future strategies of conflict reduction, such as economic development to give people stakes in the society, along with disarmament of combatants; many other studies explore these. Here, the exploration is of the nature of human society, informed by history, examples, case studies and a sweep of cross-disciplinary analysis. Understanding why pacifying Leviathan is so hard is the basic first step, and it forms the bulk of this thesis. Putting such understanding into practice involves many further steps. Important as these might be for current and future policy and practice in peace-building, their full development is beyond the scope of this thesis. Some suggestions are made, especially in the conclusion, but elaboration will have to await further work.