
    Comparison of Sudoku Solving Skills of Preschool Children Enrolled in the Montessori Approach and the National Education Programs — Yıldız Güven, Cihat Gültekin, A. Beyzanur Dedeoğlu

    According to Johnson-Laird (2010), sudoku, a mind game, is based on pure deduction and reasoning processes. This study analyzed the sudoku-solving skills of preschool children to ascertain whether there was a difference between children educated according to the Ministry of National Education preschool education program and those educated according to the Montessori approach. Children's sudoku skills were analyzed by gender, age, duration of preschool attendance, mother's and father's education level, and previous experience of playing sudoku, using a 12-question Sudoku Skills Measurement Tool developed for this study. The sample consisted of 118 children (57 girls, 61 boys) aged 54-77 months. The findings showed no significant difference in sudoku skills by gender. However, sudoku skills varied with age (54-65 months versus 66-77 months) in favor of the older group. Children's sudoku skills were more developed the higher the education level of either parent, and children who had attended preschool for longer had higher sudoku scores. Previous experience of playing sudoku did not affect sudoku scores. The sudoku skills of children educated according to the Montessori program were more developed than those of children educated according to the Ministry of National Education program.
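
    To make the "pure deduction" that Johnson-Laird describes concrete, the sketch below applies one classic deduction rule (the forced cell, or "naked single") to a 4x4 grid, the size commonly used with young children. It is purely illustrative and is not the Sudoku Skills Measurement Tool developed for the study; the grid is invented.

```python
# Minimal sketch of a single sudoku deduction ("naked single") on a
# 4x4 grid. Illustrative only; not the study's measurement tool.
from itertools import product

def candidates(grid, r, c):
    """Values not yet ruled out for empty cell (r, c) by row, column, box."""
    if grid[r][c] != 0:
        return set()
    used = set(grid[r]) | {grid[i][c] for i in range(4)}
    br, bc = 2 * (r // 2), 2 * (c // 2)
    used |= {grid[i][j] for i in range(br, br + 2) for j in range(bc, bc + 2)}
    return {1, 2, 3, 4} - used

grid = [[1, 0, 0, 4],
        [0, 4, 1, 0],
        [4, 1, 0, 2],
        [0, 0, 4, 1]]

for r, c in product(range(4), repeat=2):
    cands = candidates(grid, r, c)
    if len(cands) == 1:  # forced by pure deduction, no guessing needed
        print(f"cell ({r},{c}) must be {cands.pop()}")
```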

    Using Small MUSes to Explain How to Solve Pen and Paper Puzzles

    In this paper, we present Demystify, a general tool for creating human-interpretable, step-by-step explanations of how to solve a wide range of pen and paper puzzles from a high-level logical description. Demystify is based on Minimal Unsatisfiable Subsets (MUSes), which allow it to solve puzzles as a series of logical deductions by identifying which parts of the puzzle are required to progress. This paper makes three contributions over previous work. First, we provide a generic input language, based on the Essence constraint language, which allows us to easily use MUSes to solve a much wider range of pen and paper puzzles. Second, we demonstrate that the explanations Demystify produces match those provided independently by puzzle experts: compared against published guides for a range of different pen and paper puzzles, Demystify produces solving strategies that closely match the human-produced guides (on average 89% of the time). Finally, we introduce a new randomised algorithm to find MUSes for more difficult puzzles, focused on optimised search for individual small MUSes.
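
    A minimal sketch of the deletion-based MUS extraction that underlies this style of explanation: starting from an unsatisfiable constraint set, each constraint is dropped in turn and kept only if its removal restores satisfiability. The toy clauses and the brute-force oracle are assumptions for illustration; Demystify itself works from Essence descriptions with constraint-solving machinery.

```python
# Sketch of deletion-based MUS extraction: shrink an unsatisfiable
# constraint set until every remaining constraint is necessary.
from itertools import product

def satisfiable(constraints, n_vars=3):
    """Brute-force SAT oracle over n_vars booleans (fine for toy sizes)."""
    return any(all(c(assignment) for c in constraints)
               for assignment in product([False, True], repeat=n_vars))

def deletion_mus(constraints):
    """Drop each constraint in turn; keep it only if removal restores SAT."""
    mus = list(constraints)
    for c in list(mus):
        rest = [d for d in mus if d is not c]
        if not satisfiable(rest):
            mus = rest  # c was not needed for unsatisfiability
    return mus

# x0, (not x0 or x1), (not x1), (x2) -- the first three clauses conflict
clauses = [lambda a: a[0],
           lambda a: (not a[0]) or a[1],
           lambda a: not a[1],
           lambda a: a[2]]
core = deletion_mus(clauses)
print(f"MUS size: {len(core)} of {len(clauses)} clauses")  # -> 3 of 4
```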

    Hybrid meta-heuristics for combinatorial optimization

    Combinatorial optimization problems arise, in many forms, in various aspects of everyday life. Nowadays, many services are driven by optimization algorithms, enabling us to make the best use of the available resources while guaranteeing a level of service. Examples of such services are public transportation, goods delivery, university timetabling, and patient scheduling. Thanks also to the open data movement, a large body of usage data about public and private services is accessible today, sometimes in aggregate form, to everyone. Examples of such data are traffic information (Google), bike sharing system usage (CitiBike NYC), location services, etc. The availability of all this data allows us to better understand how people interact with these services. However, for this information to be useful, it is necessary to develop tools to extract knowledge from it and to drive better decisions. In this context, optimization is a powerful tool, which can be used to improve the way the available resources are used, avoid waste, and improve the sustainability of services. The fields of meta-heuristics, artificial intelligence, and operations research have been tackling many of these problems for years, without much interaction. However, in the last few years, these communities have started looking at each other's advancements, in order to develop optimization techniques that are faster, more robust, and easier to maintain. This effort gave birth to the fertile field of hybrid meta-heuristics.
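
    A minimal sketch of the kind of hybridization described above, assuming a toy travelling-salesman instance: a constructive heuristic (nearest neighbour) supplies a starting solution that a local-search phase (2-opt) then refines. The instance and parameters are invented for illustration.

```python
# Hybrid heuristic sketch: greedy construction + 2-opt local search
# on a random travelling-salesman instance. Illustrative only.
import math, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
dist = lambda a, b: math.dist(pts[a], pts[b])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

# Constructive phase: nearest-neighbour tour.
tour, left = [0], set(range(1, len(pts)))
while left:
    nxt = min(left, key=lambda j: dist(tour[-1], j))
    tour.append(nxt); left.remove(nxt)

# Improvement phase: 2-opt moves until no improvement remains.
improved = True
while improved:
    improved = False
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand) < tour_length(tour) - 1e-12:
                tour, improved = cand, True

print(f"tour length after hybrid search: {tour_length(tour):.3f}")
```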

    Thinking through actions with things: a systemic perspective on analytic problem solving and mental arithmetic

    In solving everyday problems or making sense of situations, people interact with local resources, both material and cultural (Kirsh, 2009a). Through these interactions with the world, thinking emerges from within and beyond the boundaries of the mind. Traditional frameworks specify that problem solving proceeds from initial state to goal state through the transformation of a mental representation of the problem by the retrieval and manipulation of symbols and rules previously stored in memory. Information garnered through bodily actions or from transactions with the world is considered to be a passive input. As a result, classical models of cognitive psychology frequently overlook the impact of the interaction between an individual and the environment on cognition. The experiments reported here were designed to inform a different model of problem solving, one that includes the ubiquitous nature of interactivity in daily life, by examining problem solving using artefacts. The research programme began with two experiments using an analytical problem, namely the river-crossing task. These experiments offered a platform to investigate the role of interactivity in shaping and transforming the problem presented. However, the problem space in the river-crossing task is relatively narrow, and the research programme proceeded to three further experiments, this time using mental arithmetic tasks in which participants were invited to complete long sums. These problems afford a much larger problem space, and a better opportunity to monitor how participants' actions shape the physical presentation of the problem. Different task ecologies were used in the five experiments to contrast different levels of interactivity. In a low-interactivity condition, solvers relied predominantly on internal mental resources; in a high-interactivity condition, participants were invited to use artefacts that corresponded to key features of the problem in producing a solution. Results from all experiments confirmed that increasing interactivity improved performance. The outcomes from the river-crossing experiments informed accounts of transfer, as it was revealed that attempting the problem first in the low-interactivity condition and then in the high-interactivity condition resulted in the most efficient learning experience. The conjecture is that learning of a more deliberative nature took place in the low-interactivity version of the problem, which participants could then showcase by enacting moves quickly in a second, highly interactive attempt. The mental arithmetic experiments revealed that a high level of interactivity not only produced greater accuracy and efficiency, but also allowed participants to enact different arithmetic knowledge as they reconfigured the problem. In addition, the findings indicated that maths anxiety over long additions could be mitigated through increased interaction with artefacts; that trajectories for problem solving and the final solutions varied across differing interactive contexts; and that the opportunity to manipulate artefacts appeared to diminish individual differences in mathematical skills. The varied task ecologies for the problems in these experiments altered performance and shaped differing trajectories to solution.
These results imply that, in order to establish a more complete understanding of cognition in action, problem-solving theories should reflect the situated, dynamic interaction between agent and environment and hence the unfolding nature of problems and their emerging solutions. The findings and methods reported here suggest that a methodology blending traditional quantitative techniques with a more qualitative, ideographic cognitive science would make a substantial contribution to problem-solving research and theory.
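
    To make the "relatively narrow problem space" of the river-crossing task concrete, the sketch below enumerates the state space of the classic missionaries-and-cannibals variant by breadth-first search (the exact variant used in the experiments is not stated in the abstract; this one is an assumption for illustration).

```python
# Breadth-first search over the missionaries-and-cannibals
# river-crossing task, showing how small the state space is.
from collections import deque

START, GOAL = (3, 3, 1), (0, 0, 0)  # (missionaries, cannibals, boat) on left bank

def legal(m, c):
    # On each bank, cannibals never outnumber missionaries present.
    return (m == 0 or m >= c) and ((3 - m) == 0 or (3 - m) >= (3 - c))

def moves(state):
    m, c, b = state
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # boat holds 1-2
        nm, nc = (m - dm, c - dc) if b else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and legal(nm, nc):
            yield (nm, nc, 1 - b)

frontier, seen = deque([(START, [START])]), {START}
while frontier:
    state, path = frontier.popleft()
    if state == GOAL:
        print(f"solved in {len(path) - 1} crossings; {len(seen)} states explored")
        break
    for nxt in moves(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, path + [nxt]))
```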

    Engineering SAT Applications

    The satisfiability problem of propositional logic (SAT) is a fundamental problem not only in theoretical computer science, where every NP-complete problem can be reduced to it: thanks to the development of very efficient SAT solvers, a wide variety of practical applications have also emerged over the past 15 years, the best known being the verification of hardware and software components. When a SAT problem turns out to be unsatisfiable, developers and users are often interested in an explanation of the unsatisfiability. One way to obtain such an explanation is to compute minimal unsatisfiable subformulas. Three fundamentally different strategies for computing these subformulas are known: inserting clauses into a satisfiable subproblem, removing clauses from an unsatisfiable subproblem, and a combination of the two. In this thesis, we first develop an interactive variant of the deletion-based strategy. It allows users to manually explore interesting regions of the search space and to derive meaningful explanations of the unsatisfiability. The theory developed for the interactive computation of minimal unsatisfiable subformulas, which spares users of the prototype unnecessary computation steps, is then applied to the automatic enumeration of multiple minimal unsatisfiable subformulas, further improving the currently fastest algorithms. The idea is to group several clauses into a block, and we show how such blocks can positively influence the computation of minimal unsatisfiable subformulas. By implementing a prototype based on the current methods, we were able to demonstrate the effectiveness of our ideas. Having improved fundamental algorithms for unsatisfiable SAT problems in the first part of the thesis, in the second part we turn to new applications of SAT. The first is a problem from bioinformatics: we solve the compatibility problem for evolutionary trees via an encoding as a satisfiability problem, and then show how this new encoding can be used to solve a closely related optimization problem. We compare our new approach with the hitherto most effective approaches to this optimization problem and show that we achieve new best computation times for the vast majority of the tested instances. The second new application of SAT is a problem from graph theory, more precisely graph drawing. A simple, intuitive, yet effective formulation allowed us to obtain new results for the book embedding problem. First, we established a non-trivial lower bound of four on the number of pages required for 1-planar graphs. Second, we showed that not every planar graph admits a three-page embedding computed via a so-called Schnyder decomposition into three distinct trees.
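
    The block idea can be made concrete with selector variables: each block of clauses is guarded by one assumption literal, so an unsatisfiable core over the selectors implicates whole blocks at once. The sketch below uses the python-sat (pysat) package and is a hedged illustration, not the prototype developed in the thesis.

```python
# Blocks of clauses via selector variables with pysat: assuming a
# selector activates its block, and the unsat core over selectors
# names the conflicting blocks. Illustrative sketch only.
from pysat.solvers import Glucose3

# Variables 1..2 are problem variables; 3..4 are block selectors.
blocks = {
    3: [[1], [-1, 2]],   # block A: x1, and x1 -> x2
    4: [[-2]],           # block B: not x2
}

solver = Glucose3()
for selector, clauses in blocks.items():
    for clause in clauses:
        # (-selector OR clause): the block only applies when assumed.
        solver.add_clause([-selector] + clause)

assert not solver.solve(assumptions=[3, 4])
print("conflicting blocks:", solver.get_core())  # both blocks are needed
```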

    Exact methods for Bayesian network structure learning and cost function networks

    Discrete Graphical Models (GMs) represent joint functions over large sets of discrete variables as a combination of smaller functions. There exist several instantiations of GMs, including directed probabilistic GMs like Bayesian Networks (BNs) and undirected deterministic models like Cost Function Networks (CFNs). Queries like Most Probable Explanation (MPE) on BNs and its equivalent on CFNs, which is cost minimisation, are NP-hard, but there exist robust solving techniques which have found a wide range of applications in fields such as bioinformatics, image processing, and risk analysis. In this thesis, we make contributions to the state of the art in learning the structure of BNs, namely the Bayesian Network Structure Learning problem (BNSL), and in answering MPE and minimisation queries on BNs and CFNs.
For BNSL, we discover a new point in the design space of search algorithms, one that achieves a different trade-off between inference strength and speed of inference. Existing algorithms opt either for maximal strength of inference, like those based on Integer Programming (IP) and branch-and-cut, or for maximal speed of inference, like those based on Constraint Programming (CP). We specify properties of a specific class of inequalities, called cluster inequalities, which lead to an algorithm that performs much stronger inference than CP-based methods while running much faster than IP-based ones. We combine this with novel ideas for stronger propagation and more compact domain representations to achieve state-of-the-art performance in the open-source solver ELSA (Exact Learning of bayesian network Structure using Acyclicity reasoning). For CFNs, we identify a weakness in the use of linear programming relaxations by a specific class of solvers, which includes the award-winning open-source ToulBar2 solver. We prove that this weakness can lead to suboptimal branching decisions and show how to detect maximal sets of such decisions, which can then be avoided by the solver. This allows ToulBar2 to tackle problems previously solvable only by hybrid algorithms.
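
    The cluster inequalities mentioned above state that, for any cluster C of variables, at least one variable in C must take all of its parents from outside C; otherwise the structure restricted to C would contain a cycle. The brute-force check below verifies this on three variables and is an illustrative sketch only, not the ELSA implementation.

```python
# Validity check of cluster inequalities for Bayesian network
# structure learning, on all acyclic structures over 3 variables.
from itertools import combinations, permutations

VARS = (0, 1, 2)

def parent_sets(v):
    others = [u for u in VARS if u != v]
    return [frozenset(s) for r in range(len(others) + 1)
            for s in combinations(others, r)]

def is_acyclic(parents):
    # A DAG admits a topological order in which parents precede children.
    return any(all(all(order.index(p) < order.index(v) for p in parents[v])
                   for v in VARS)
               for order in permutations(VARS))

# Every acyclic parent-set assignment satisfies every cluster inequality.
for choice0 in parent_sets(0):
    for choice1 in parent_sets(1):
        for choice2 in parent_sets(2):
            parents = {0: choice0, 1: choice1, 2: choice2}
            if is_acyclic(parents):
                for r in range(2, 4):
                    for cluster in combinations(VARS, r):
                        C = set(cluster)
                        assert any(parents[v].isdisjoint(C) for v in C)
print("all cluster inequalities hold for every acyclic structure")
```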

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the travelling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
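
    As one concrete instance of the algorithms and problems listed above, the sketch below applies a minimal evolutionary algorithm (tournament selection, one-point crossover, bit-flip mutation) to a toy 0-1 knapsack instance. The data and parameters are invented for illustration.

```python
# Minimal evolutionary algorithm for a toy 0-1 knapsack instance.
import random

random.seed(1)
weights = [12, 7, 11, 8, 9, 6, 5, 14, 3, 10]
values  = [24, 13, 23, 15, 16, 11, 8, 27, 4, 19]
CAPACITY, POP, GENS = 35, 40, 60

def fitness(bits):
    w = sum(wi for wi, b in zip(weights, bits) if b)
    v = sum(vi for vi, b in zip(values, bits) if b)
    return v if w <= CAPACITY else 0  # infeasible solutions score zero

pop = [[random.randint(0, 1) for _ in weights] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        # Tournament selection of two parents.
        p1 = max(random.sample(pop, 3), key=fitness)
        p2 = max(random.sample(pop, 3), key=fitness)
        cut = random.randrange(1, len(weights))      # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:                    # bit-flip mutation
            i = random.randrange(len(child))
            child[i] ^= 1
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print("best value found:", fitness(best))
```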