Validation of machine-oriented strategies in chess endgames
This thesis is concerned with the validation of chess endgame
strategies. It is also concerned with the synthesis of strategies
that can be validated. A strategy for a given player is the
specification of the move to be made by that player from any
position that may occur. This move may be dependent on the
previous moves of both sides. A strategy is said to be correct if
following the strategy always leads to an outcome of at least the
same game theoretic value as the starting position.

We are not concerned with proving the correctness of programs
that implement the strategies under consideration. We shall be
working with knowledge-based programs which produce playing
strategies, and assume that their concrete implementations (in
POP2, PROLOG etc.) are correct.

The synthesis approach taken attempts to use the large body
of heuristic knowledge and theory, accumulated over the centuries
by chessmasters, to find playing strategies. Our concern here is
to produce structures for representing a chessmaster's knowledge
which can be analysed within a game theoretic model.

The validation approach taken is that a theory of the domain,
in the form of the game theoretic model of chess, provides an
objective measure of the strategy followed by a program. Our
concern here is to analyse the structures created in the
synthesis phase. This is an instance of a general problem, that
of quantifying the performance of computing systems. In general,
to quantify the performance of a system we need:
- A theory of the domain.
- A specification of the problem to be solved.
- Algorithms and/or domain-specific knowledge to be applied to
  solve the problem.
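The correctness criterion above can be made concrete in a small sketch. The toy game tree, the strategy representation (a function from position to move), and all names below are illustrative assumptions, not code from the thesis: a strategy for the maximising player is correct if following it against an adversarial opponent guarantees at least the minimax value of the starting position.

```python
# Toy game tree: a node is either a terminal value (int, from the
# maximising player's viewpoint) or a dict of move -> subtree.
# Turns alternate with depth: MAX moves at the root.

def minimax(node, max_to_move):
    """Game-theoretic value of a position."""
    if isinstance(node, int):                     # terminal position
        return node
    vals = [minimax(c, not max_to_move) for c in node.values()]
    return max(vals) if max_to_move else min(vals)

def guaranteed_value(node, strategy, max_to_move):
    """Value MAX secures by following `strategy` (position -> move)
    while MIN replies as adversarially as possible."""
    if isinstance(node, int):
        return node
    if max_to_move:
        return guaranteed_value(node[strategy(node)], strategy, False)
    return min(guaranteed_value(c, strategy, True) for c in node.values())

def is_correct(root, strategy):
    # Correct = guarantees at least the game-theoretic value of the start.
    return guaranteed_value(root, strategy, True) >= minimax(root, True)

# A 2-ply toy game: MAX chooses a or b, then MIN replies.
game = {"a": {"x": 3, "y": 5},   # MIN would answer with 3
        "b": {"x": 2, "y": 9}}   # MIN would answer with 2

best_first = lambda pos: max(pos, key=lambda m: minimax(pos[m], False))
print(is_correct(game, best_first))   # prints True
```

Note that the strategy only specifies the moves of one player, matching the definition above; the opponent's replies are quantified over in `guaranteed_value`.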
Machine Learning Approaches for the Prioritisation of Cardiovascular Disease Genes Following Genome-wide Association Study
Genome-wide association studies (GWAS) have revealed thousands of genetic loci, establishing the method as a valuable one for unravelling the complex biology of many diseases. As GWAS have grown in size and improved in study design to detect effects, identifying real causal signals and disentangling them from other highly correlated markers associated through linkage disequilibrium (LD) remains challenging. This has severely limited GWAS findings and brought the method’s value into question. Although thousands of disease susceptibility loci have been reported, causal variants and genes at these loci remain elusive. Post-GWAS analysis aims to dissect the heterogeneity of variant and gene signals. In recent years, machine learning (ML) models have been developed for post-GWAS prioritisation. ML models have ranged from logistic regression to more complex ensemble models such as random forests and gradient boosting, as well as deep learning models (i.e., neural networks). When combined with functional validation, these methods have yielded important translational insights, providing a strong evidence-based approach to direct post-GWAS research. However, ML approaches are in their infancy across biological applications, and as they continue to evolve, an evaluation of their robustness for GWAS prioritisation is needed. Here, I investigate the landscape of ML across selected models, input features, bias risk, and output model performance, with a focus on building a prioritisation framework that is applied to blood pressure GWAS results and tested on re-application to blood lipid traits.
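The simplest model family the abstract mentions, logistic regression, illustrates what "prioritisation" means operationally: train on genes with known labels, then rank unlabelled candidates by predicted causal probability. The features, gene names, and synthetic data below are hypothetical stand-ins, not the thesis's actual inputs:

```python
import math, random

# Hypothetical training set: each candidate gene is described by two
# post-GWAS features (e.g. proximity to the lead SNP, eQTL evidence);
# the label marks genes with prior causal evidence.
random.seed(0)
def make_gene(causal):
    return ([random.gauss(1.0 if causal else 0.0, 1.0),
             random.gauss(0.5 if causal else -0.5, 1.0)], causal)

train = [make_gene(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(data, lr=0.1, epochs=200):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - (1.0 if y else 0.0)
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = fit_logistic(train)

# Prioritise unlabelled candidates: rank by predicted causal probability.
candidates = {"GENE_A": [1.4, 0.9], "GENE_B": [-0.6, -0.8], "GENE_C": [0.4, 0.1]}
score = lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
ranking = sorted(candidates, key=lambda g: score(candidates[g]), reverse=True)
print(ranking)
```

The ensemble and deep models surveyed in the thesis replace the linear scorer with a more expressive one, but the prioritisation step (sorting candidates by predicted probability) is the same.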
Computer-based methods of knowledge generation in science - What can the computer tell us about the world?
The computer has significantly changed scientific practice in almost all disciplines. Alongside traditional sources of new knowledge such as observations, deductive arguments, or experiments, computer-based methods such as 'computer simulations' and 'machine learning' are now regularly cited as such sources. This shift in science raises philosophy-of-science questions about these new methods. One of the most obvious questions is whether these new methods are suited to serve as sources of new knowledge. This thesis pursues that question, with a particular focus on one of the central problems of computer-based methods: opacity. Computer-based methods are called opaque when the causal connection between input and result cannot be traced. The central questions of this work are whether computer simulations and machine learning algorithms are opaque, whether the opacity of the two methods is of the same nature, and whether opacity prevents gaining new knowledge with computer-based methods. These questions are examined close to scientific practice; in particular, particle physics and the ATLAS experiment at CERN serve as important case studies.
The thesis is based on five articles. In the first two, computer simulations are compared with two other methods (experiments and arguments) in order to situate them methodologically and to work out which challenges in gaining knowledge distinguish computer simulations from the other methods. The first article compares computer simulations and experiments. Given the diversity of computer simulations, however, a blanket comparison with experiments is not sensible; various epistemic aspects are worked out on whose basis the comparison should be carried out depending on the context of application. The second article discusses a position formulated by Claus Beisbart that understands computer simulations as arguments. This 'argument view' describes how computer simulations work very well, and thereby makes it possible to answer questions about the opacity and the inductive character of computer simulations. However, the argument view alone cannot sufficiently explain how new knowledge can be gained with computer simulations. The third article deals with the role of models in theoretical ecology. Models are a central component of computer simulations and machine learning algorithms, so the questions about the relationship between phenomena and models, considered here through examples from ecology, are of central importance for the epistemic questions of this thesis. The fourth article forms the link between the topics of computer simulation and machine learning. In it, different kinds of opacity are defined, and computer simulations and machine learning algorithms are examined, using examples from particle physics, to determine which kinds of opacity each exhibits.
It is argued that opacity poses no problem in principle for gaining knowledge with computer simulations, but that model opacity could be a source of fundamental opacity for machine learning algorithms. In the fifth article, the same terminology is applied to the domain of chess computers. The comparison between a traditional chess engine and a chess engine based on a neural network makes it possible to illustrate the consequences of the different kinds of opacity.
Overall, the thesis provides a methodological classification of computer simulations and shows that neither a reference to experiments nor to arguments alone can clarify how computer simulations lead to new knowledge. A clear definition of the kinds of opacity present in each case makes it possible to distinguish computer simulations from the closely related machine learning algorithms.
Machine Learning
Machine learning can be defined in various ways; broadly, it is a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability to improve automatically through experience.
Monte-Carlo tree search using expert knowledge: an application to computer go and human genetics
Monte-Carlo Tree Search (MCTS) has become the main algorithm in many problems of artificial intelligence and computer science. This thesis analyses the incorporation of expert knowledge to improve the search. The work describes two applications: one in computer go and another in the field of human genetics. It is an established fact that, in complex problems, MCTS requires the support of domain-specific or online-learned knowledge to improve its performance. What this work analyses are different ideas of how to do so, their results, and their implications, thereby improving our understanding of MCTS. The main contributions to the area are: an analytical model of the simulations that improves understanding of the role of simulations, a competitive framework including code and data for comparing methods in genetic aetiology, and three successful applications: one in the field of 19x19 go openings called M-eval, another on simulations that learn, and one in genetic aetiology. In addition, worth highlighting are: a model for representing proportions by means of states, called WLS, with free software; a negative result on an idea for the simulations; the unexpected discovery of a possible problem when using MCTS in optimisation; and an original analysis of the limitations.
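The selection, expansion, simulation, and backpropagation cycle that MCTS iterates can be sketched minimally. The toy one-player game, its rewards, and all names below are illustrative assumptions, not code from the thesis:

```python
import math, random

# A tiny one-player game: make DEPTH binary choices; each leaf pays the
# reward listed below. MCTS should discover that first move 1 leads to
# the best reachable leaf.
DEPTH = 3
REWARD = {(1, 0, 1): 1.0, (0, 1, 1): 0.6}   # all other leaves pay 0

class Node:
    def __init__(self, path=()):
        self.path, self.visits, self.value = path, 0, 0.0
        self.children = {}

    def ucb_child(self, c=1.4):
        # UCB1: exploit average reward, explore rarely tried moves.
        def ucb(n):
            if n.visits == 0:
                return float("inf")          # always try unvisited moves
            return (n.value / n.visits
                    + c * math.sqrt(math.log(self.visits) / n.visits))
        return max(self.children.values(), key=ucb)

def rollout(path):
    # Default policy: finish the game with uniformly random moves.
    while len(path) < DEPTH:
        path += (random.choice((0, 1)),)
    return REWARD.get(path, 0.0)

def mcts(root, iters=2000):
    for _ in range(iters):
        node, trail = root, [root]
        while len(node.path) < DEPTH and len(node.children) == 2:
            node = node.ucb_child(); trail.append(node)       # select
        if len(node.path) < DEPTH and node.visits > 0:        # expand
            for m in (0, 1):
                node.children[m] = Node(node.path + (m,))
            node = node.children[random.choice((0, 1))]; trail.append(node)
        reward = rollout(node.path)                           # simulate
        for n in trail:                                       # backpropagate
            n.visits += 1; n.value += reward
    # Recommend the most-visited first move.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(1)
root = Node()
print(mcts(root))
```

Incorporating expert knowledge, as the thesis does, typically means biasing the rollout policy or the selection term with domain-specific priors rather than using the uniform choices above.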
Languages of games and play: A systematic mapping study
Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.
Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)
This report contains the following papers: implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; 'competence-switching' managed by intelligent systems; strategy acquisition by an artificial neural network: experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.
Exploring how entrepreneurs make decisions on the growth of their business: A cognitive perspective
The purpose of this study was to explore how entrepreneurs, who are past the start-up stage of business, evaluate and make decisions on growth opportunities. Small business growth is a complex, dynamic and episodic phenomenon, and prior research on firm growth has emphasised cross-sectional approaches rather than viewing growth as a dynamic process over time. Understanding small business entrepreneurs’ cognition and behaviours when making opportunity-related decisions will show how growth decisions are made. It is still unclear what cognitive styles and knowledge structures entrepreneurs use to process and frame information for opportunity-related decision-making. A closer look at opportunity evaluation, decision-making and entrepreneurial cognition revealed fragmentation, research gaps and areas for future research recommended by key scholars. As a consequence, an integrated process approach was taken using these three research streams. Specifically, a cognitive style lens, as a complex construct with multiple dimensions, was used for viewing opportunity-related decisions, an approach missing from the opportunity evaluation literature. Additionally, the study was conceptually underpinned by a dual process theory, the cognitive-experiential self-theory (CEST). A longitudinal, concurrent triangulation design was used to explore the decision-making process over five time points in a two-year period. A mixed methods approach supported the pragmatic paradigm for an exploratory study. A multiple-case strategy used a sample of 11 small manufacturing entrepreneurs, from novice to mature, with 3-30 years’ experience as owner-managers. Data were collected at each time point using semi-structured interviews and two style assessments, the CoSI and REI. Quantitative data were analysed using descriptive statistics, and the qualitative data using thematic analysis. Combining interviews and psychometric questionnaires for triangulation produced robust findings.
Data were used to construct cognitive maps and measures of cognitive complexity for insight. Findings showed entrepreneurs were high on more than one style and switched between styles according to context, demonstrating that styles were orthogonal. A unique finding was a synthesised, versatile style observed as a ‘mirror effect’ between the analytical and intuitive styles. Novices developed a more intuitive style over time, contingent with experience. A developing link in the novices’ mental structures showed how past experience increased cognitive complexity and connectivity. A further unique finding showed the central concept ‘Thinks it through’ in the decision process acting as a structural conduit or ‘hub’ for both analytical and intuitive processing. Analysis suggested that cognitive complexity mediated the relationship between creative and experiential information styles and opportunity-related decision-making effectiveness. These unique findings show opportunity-related decisions to be a dynamic, time-based process. The time-based model provides a framework for future opportunity evaluation research as a contribution to theory. Likewise, the dual process and information processing perspective offers an alternative structure for examining opportunity evaluation. Finally, a teaching model was developed to improve metacognitive thinking and connectivity for decision-making effectiveness as a contribution to practice.