14 research outputs found

    General general game AI

    Arguably the grand goal of artificial intelligence research is to produce machines with general intelligence: the capacity to solve multiple problems, not just one. Artificial intelligence (AI) has investigated the general intelligence capacity of machines within the domain of games more than any other domain, given the ideal properties of games for that purpose: controlled yet interesting and computationally hard problems. This line of research, however, has so far focused solely on one specific way in which intelligence can be applied to games: playing them. In this paper, we build on the general game-playing paradigm and expand it to cater for all core AI tasks within a game design process. That includes general player experience and behavior modeling, general non-player character behavior, general AI-assisted tools, general level generation, and complete game generation. The new scope for general general game AI beyond game-playing broadens the applicability and capacity of AI algorithms and our understanding of intelligence as tested in a creative domain that interweaves problem solving, art, and engineering.

    Generation and Analysis of Content for Physics-Based Video Games

    The development of artificial intelligence (AI) techniques that can assist with the creation and analysis of digital content is a broad and challenging task for researchers. This topic has been most prevalent in the field of game AI research, where games are used as a testbed for solving more complex real-world problems. One of the major issues with prior AI-assisted content creation methods for games has been a lack of direct comparability to real-world environments, particularly those with realistic physical properties to consider. Creating content for such environments typically requires physics-based reasoning, which imposes many additional complications and restrictions that must be considered. Addressing and developing methods that can deal with these physical constraints, even if they are only within simulated game environments, is an important and challenging task for AI techniques that are intended to be used in real-world situations. The research presented in this thesis describes several approaches to creating and analysing levels for the physics-based puzzle game Angry Birds, which features a realistic 2D environment. This research was multidisciplinary in nature, covering a wide variety of different AI fields, and this thesis is therefore presented as a compilation of published work. The central part of this thesis consists of procedurally generating levels for physics-based games similar to those in Angry Birds. This predominantly involves creating and placing stable structures made up of many smaller blocks, as well as other level elements. Multiple approaches are presented, including both fully autonomous and human-AI collaborative methodologies. In addition, several analyses of Angry Birds levels were carried out using current state-of-the-art agents. A hyper-agent was developed that uses machine learning to estimate the performance of each agent in a portfolio on an unknown level, allowing it to select the one most likely to succeed. Agent performance on levels that contain deceptive or creative properties was also investigated, allowing determination of the current strengths and weaknesses of different AI techniques. The observed variability in performance across levels for different AI techniques led to the development of an adaptive level generation system, allowing for the dynamic creation of increasingly challenging levels over time based on agent performance analysis. An additional study also investigated the theoretical complexity of Angry Birds levels from a computational perspective. While this research is predominantly applied to video games with physics-based simulated environments, the challenges and problems solved by the proposed methods also have significant real-world potential and applications.
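    The hyper-agent described above is essentially a portfolio-selection problem: given features of an unseen level, predict which agent is most likely to solve it. Below is a minimal, hedged sketch of that idea in Python using scikit-learn; the agent names, level features, and synthetic training data are invented placeholders, not the thesis's actual setup.

```python
# Sketch of hyper-agent selection: one success predictor per portfolio agent,
# trained on past (level features, solved?) outcomes; for a new level, pick
# the agent with the highest predicted success probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
AGENTS = ["agent_a", "agent_b", "agent_c"]  # hypothetical portfolio

# Synthetic training data: each row is a level feature vector (e.g. block
# count, pig count, structure height); one binary success label per agent.
X = rng.random((500, 6))
labels = {a: (rng.random(500) < 0.5).astype(int) for a in AGENTS}

models = {a: RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
          for a, y in labels.items()}

def select_agent(level_features):
    """Return the agent with the highest predicted success probability."""
    probs = {a: m.predict_proba(level_features.reshape(1, -1))[0, 1]
             for a, m in models.items()}
    return max(probs, key=probs.get), probs

best, probs = select_agent(rng.random(6))
print(best, probs)
```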

    Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), consisting on average of 10 non-team members from academia, government, and industry, to review and update content and to prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem composed of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focuses on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope that this vision provides the strategic guidance that both public and private research and development decision makers need to make the proposed 2040 vision state a reality, thereby significantly advancing the United States' global competitiveness.

    Automated iterative game design

    Computational systems to model aspects of iterative game design were proposed, encompassing: game generation, sampling behaviors in a game, analyzing game behaviors for patterns, and iteratively altering a game design. Explicit models of the actions in games as planning operators allowed an intelligent system to reason about how actions and action sequences affect gameplay and to create new mechanics. Metrics to analyze differences in player strategies were presented and were able to identify flaws in game designs. An intelligent system learned design knowledge about gameplay and was able to reduce the number of design iterations needed when playtesting a game to achieve a design goal. Implications for how intelligent systems augment and automate human game design practices are discussed.
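    The abstract models game actions as planning operators; the sketch below illustrates what such a representation might look like, using STRIPS-style preconditions and effects over a set of state facts. The operators and facts are invented examples for illustration, not the dissertation's actual mechanics.

```python
# STRIPS-style planning operators: each action lists preconditions plus add
# and delete effects, so a system can reason about how action sequences
# change the game state (and, by extension, gameplay).
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

def applicable(op: Operator, state: frozenset) -> bool:
    return op.preconditions <= state

def apply_op(op: Operator, state: frozenset) -> frozenset:
    return (state - op.delete_effects) | op.add_effects

# Two toy mechanics: jumping, then stomping an enemy while airborne.
jump = Operator("jump", frozenset({"on_ground"}),
                frozenset({"airborne"}), frozenset({"on_ground"}))
stomp = Operator("stomp", frozenset({"airborne", "enemy_below"}),
                 frozenset({"enemy_defeated"}), frozenset({"airborne"}))

state = frozenset({"on_ground", "enemy_below"})
for op in (jump, stomp):  # simulate an action sequence
    assert applicable(op, state)
    state = apply_op(op, state)
print(state)  # frozenset({'enemy_below', 'enemy_defeated'})
```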

    3D multiple description coding for error resilience over wireless networks

    Mobile communications have gained growing interest from both customers and service providers over the last two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting, and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one technique widely used in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression such as MPEG, JPEG and H.263 cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget to represent the 3D scene, bit-rate allocation between texture and depth information should be optimised so that rendering distortion/losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer better QoE to end users. This research work aims at enhancing the error resilience capability of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's QoE. Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing: rating people's perception of 3D video under error-free and error-prone conditions through the use of a carefully designed bespoke questionnaire.
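    As a concrete illustration of the Multiple Description Coding (MDC) idea the thesis builds on, here is a minimal sketch using odd/even temporal splitting: two independently decodable descriptions are produced, and if one is lost on the channel, the missing frames are concealed by interpolating the surviving description. Real MDC for 3D video operates on texture-plus-depth streams; this toy uses tiny 1-D "frames" purely for clarity.

```python
# Toy MDC with temporal splitting: description 0 carries even frames,
# description 1 carries odd frames; losing one description degrades rather
# than destroys the sequence, since neighbours can be interpolated.
import numpy as np

frames = [np.full(4, float(i)) for i in range(8)]  # toy frame sequence

desc_even = frames[0::2]  # description 0: frames 0, 2, 4, 6
desc_odd = frames[1::2]   # description 1: frames 1, 3, 5, 7

def decode(even, odd):
    """Merge both descriptions; conceal a lost one by interpolation."""
    if odd is None:  # description 1 lost: interpolate between even frames
        out = []
        for a, b in zip(even, even[1:] + [even[-1]]):
            out += [a, (a + b) / 2]  # repeat-and-average concealment
        return out
    return [f for pair in zip(even, odd) for f in pair]

full = decode(desc_even, desc_odd)   # error-free channel
concealed = decode(desc_even, None)  # description 1 lost in transit
err = np.mean([np.abs(a - b).mean() for a, b in zip(full, concealed)])
print(f"mean concealment error: {err:.3f}")
```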

    Simulating Land Use Land Cover Change Using Data Mining and Machine Learning Algorithms

    The objectives of this dissertation are to: (1) review the breadth and depth of land use land cover change (LUCC) issues that are being addressed by the land change science community by discussing how an existing model, Purdue's Land Transformation Model (LTM), has been used to better understand these very important issues; (2) summarize the current state of the art in LUCC modeling in order to provide a context for the advances presented here; (3) use a variety of statistical, data mining, and machine learning algorithms to model single LUCC transitions in diverse regions of the world (e.g., the United States and Africa) in order to determine which tools are most effective in modeling common LUCC patterns that are nonlinear; (4) develop new techniques for modeling multiple class (MC) transitions at the same time using existing LUCC models, as such models are rare and in great demand; (5) reconfigure the existing LTM for urban growth boundary (UGB) simulation, because UGB modeling has been ignored by the LUCC modeling community; and (6) compare two rule-based models for urban growth boundary simulation for use in UGB land use planning. The review of LTM applications during the last decade indicates that a model like the LTM has addressed a majority of land change science issues, although it has not explicitly been used to study terrestrial biodiversity issues. The review of existing LUCC models indicates that there is no unique typology to differentiate between LUCC model structures and that no models exist for UGB. Simulations designed to compare multiple models show that ANN-based LTM results are similar to those of Multivariate Adaptive Regression Spline (MARS)-based models, and both ANN- and MARS-based models outperform Classification and Regression Tree (CART)-based models for modeling single LULC transitions; however, for modeling MC transitions, an ANN-based LTM-MC is similar in goodness of fit to CART, and both models outperform MARS in different regions of the world. In simulations across three regions (two in the United States and one in Africa), the LTM had better goodness-of-fit measures, while the outcomes of CART and MARS were more interpretable and understandable than those of the ANN-based LTM. Modeling MC LUCC requires the examination of several class separation rules and is thus more complicated than single LULC transition modeling; more research is clearly needed in this area. One of the greatest challenges identified with MC modeling is evaluating error distributions and map accuracies for multiple classes. A modified ANN-based LTM and a simple rule-based UGB model (UGBM) outperformed a null model in all cardinal directions. For the UGBM to be useful for planning, other factors need to be considered, including a separate routine that would determine urban quantity over time.
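    The model comparison described above (ANN vs. CART, with MARS as a third family) can be sketched for a single transition as a supervised classification task over spatial driver variables. The drivers, data, and threshold rule below are synthetic stand-ins chosen only to mimic a nonlinear transition pattern; MARS is omitted because scikit-learn has no built-in implementation.

```python
# Compare an ANN and a CART-style tree on a synthetic single-transition
# (e.g. non-urban -> urban) classification task with nonlinear structure.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic drivers: distance to road, distance to centre, slope, density.
X = rng.random((2000, 4))
# Nonlinear transition rule, mimicking nonlinear LUCC patterns.
y = ((X[:, 0] * X[:, 1] < 0.15) & (X[:, 2] < 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
models = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=1),
    "CART": DecisionTreeClassifier(max_depth=5, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```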

    Activation of the pro-resolving receptor Fpr2 attenuates inflammatory microglial activation

    Poster number: P-T099
    Theme: Neurodegenerative disorders & ageing
    Title: Activation of the pro-resolving receptor Fpr2 reverses inflammatory microglial activation
    Authors: Edward S Wickstead - Life Science & Technology, University of Westminster / Queen Mary University of London
    Inflammation is a major contributor to many neurodegenerative diseases (Heneka et al. 2015). Microglia, as the resident immune cells of the brain and spinal cord, provide the first line of immunological defence, but can become deleterious when chronically activated, triggering extensive neuronal damage (Cunningham, 2013). Dampening or even reversing this activation may provide neuronal protection against chronic inflammatory damage. The aim of this study was to determine whether lipopolysaccharide (LPS)-induced inflammation could be abrogated through activation of the receptor Fpr2, which is known to play an important role in peripheral inflammatory resolution. Immortalised murine microglia (BV2 cell line) were stimulated with LPS (50 ng/ml) for 1 hour prior to treatment with one of two Fpr2 ligands, either Cpd43 or Quin-C1 (both 100 nM), and production of nitric oxide (NO), tumour necrosis factor alpha (TNFα) and interleukin-10 (IL-10) was monitored after 24 h and 48 h. Treatment with either Fpr2 ligand significantly suppressed LPS-induced production of NO and TNFα after both 24 h and 48 h exposure; moreover, Fpr2 ligand treatment significantly enhanced production of IL-10 48 h post-LPS treatment. As we have previously shown Fpr2 to be coupled to a number of intracellular signaling pathways (Cooray et al. 2013), we investigated potential signaling responses. Western blot analysis revealed no activation of ERK1/2, but identified a rapid and potent activation of p38 MAP kinase in BV2 microglia following stimulation with Fpr2 ligands. Together, these data indicate the possibility of exploiting immunomodulatory strategies for the treatment of neurological diseases, and highlight in particular the important potential of resolution mechanisms as novel therapeutic targets in neuroinflammation.
    References
    Cooray SN et al. (2013). Proc Natl Acad Sci U S A 110: 18232-7.
    Cunningham C (2013). Glia 61: 71-90.
    Heneka MT et al. (2015). Lancet Neurol 14: 388-405.