
    The Computational Complexity of Angry Birds

    The physics-based simulation game Angry Birds has been heavily researched by the AI community over the past five years, and has been the subject of a popular AI competition that is currently held annually as part of a leading AI conference. Developing intelligent agents that can play this game effectively has been an incredibly complex and challenging problem for traditional AI techniques to solve, even though the game is simple enough that any human player could learn and master it within a short time. In this paper we analyse how hard the problem really is, presenting several proofs of the computational complexity of Angry Birds. By using a combination of several gadgets within the game's environment, we are able to demonstrate that the decision problem of solving general levels for different versions of Angry Birds is either NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard. The proof of NP-hardness is by reduction from 3-SAT, whilst the proof of PSPACE-hardness is by reduction from True Quantified Boolean Formula (TQBF). The proof of EXPTIME-hardness is by reduction from G2, a known EXPTIME-complete problem similar to those used for many previous games such as Chess, Go and Checkers. To the best of our knowledge, this is the first time that a single-player game has been proven EXPTIME-hard. This is achieved by using stochastic game engine dynamics to effectively model the real world, or in our case the physics simulator, as the opponent against which we are playing. These proofs can also be extended to other physics-based games with similar mechanics.
    Comment: 55 pages, 39 figures
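    As an illustration only (the paper's actual gadget constructions are not reproduced here), the NP-hardness claim can be framed in standard reduction notation. The decision problem below, the reduction map f, and the variable/clause gadget roles are schematic assumptions consistent with the abstract, not definitions taken from the paper:

        % Schematic framing of the NP-hardness argument; illustrative only.
        % Decision problem: given a level $L$ and a bird sequence $B$, can every pig be destroyed?
        \[
          \textsc{AngryBirds} = \bigl\{\, \langle L, B \rangle : \exists\, \text{shot sequence } s \text{ that destroys all pigs of } L \,\bigr\}
        \]
        % NP-hardness follows from a polynomial-time reduction $f$ from 3-SAT with
        \[
          \varphi \in \textsc{3-SAT} \iff f(\varphi) \in \textsc{AngryBirds},
        \]
        % where, schematically, each variable of $\varphi$ becomes a gadget whose shot choice
        % encodes a truth value, and each clause becomes a structure that can only be cleared
        % when at least one of its literals is satisfied.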

    Analysis and Control of Flywheel Energy Storage Systems


    Minimizing User Effort in Large Scale Example-driven Data Exploration

    Data Exploration is a key ingredient in a wide variety of discovery-oriented applications, including scientific computing, financial analysis, and evidence-based medicine. It refers to a series of exploratory tasks that aim to extract useful pieces of knowledge from data, and its challenge is to do so without requiring the user to specify with precision what information is being searched for. The goal of helping users effortlessly construct exploratory queries that effectively reveal interesting data objects has led to the development of a variety of intelligent semi-automatic approaches. Among such approaches, Example-driven Exploration is rapidly becoming an attractive choice for exploratory query formulation, since it attempts to minimize the amount of prior knowledge required from the user to form an accurate exploratory query. In particular, this dissertation focuses on interactive Example-driven Exploration, which steers users towards discovering all data objects relevant to their exploration based on their feedback on a small set of examples. Interactive Example-driven Exploration is especially beneficial for non-expert users, as it enables them to circumvent query languages by assigning relevance to examples as a proxy for the intended exploratory analysis. However, existing interactive Example-driven Exploration systems fall short of supporting complex explorations over large, unstructured, high-dimensional data. To overcome these challenges, we have developed new methods of data reduction, example selection, data indexing, and result refinement that support practical, interactive data exploration. The novelty of our approach lies in leveraging active learning and query optimization techniques that strike a balance between maximizing accuracy and minimizing the user effort required to provide feedback, while enabling interactive performance for exploration tasks over arbitrarily large datasets. Furthermore, our approach extends exploration beyond structured data by supporting a variety of high-dimensional unstructured data, and it enables the refinement of results when an exploration task is associated with so many relevant data objects that they could overwhelm the user. To affirm the effectiveness of our proposed models, techniques, and algorithms, we implemented multiple prototype systems and evaluated them on real datasets. Some of these systems were also used in domain-specific analytics tools.
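    The abstract above gives no implementation details; purely as a hedged sketch of the active-learning idea it describes (choosing the next example for user feedback so as to balance accuracy against user effort), the loop below uses uncertainty sampling with an off-the-shelf classifier. The names (explore, label_fn), the SVC model, and the selection rule are illustrative assumptions, not the dissertation's actual system.

        # Hedged sketch of interactive example-driven exploration via uncertainty sampling.
        # All names and the classifier choice are illustrative, not from the dissertation.
        import numpy as np
        from sklearn.svm import SVC

        def explore(X, label_fn, n_init=5, n_rounds=20):
            """Repeatedly ask the user (label_fn) to label the example the current
            model is least certain about, then refit on the enlarged labelled set."""
            rng = np.random.default_rng(0)
            labeled = list(rng.choice(len(X), size=n_init, replace=False))
            y = {i: label_fn(X[i]) for i in labeled}  # initial user feedback (0/1 relevance)
            # Assumes the initial sample contains both relevant and irrelevant examples.
            for _ in range(n_rounds):
                clf = SVC(probability=True).fit(X[labeled], [y[i] for i in labeled])
                proba = clf.predict_proba(X)[:, 1]
                unlabeled = [i for i in range(len(X)) if i not in y]
                # Most uncertain example: predicted probability closest to 0.5.
                nxt = min(unlabeled, key=lambda i: abs(proba[i] - 0.5))
                y[nxt] = label_fn(X[nxt])             # one more unit of user effort
                labeled.append(nxt)
            return clf                                # final model of "relevant" data objects

    The stopping rule here is simply a fixed budget of feedback rounds; a real system would stop when the model's predictions stabilise or an accuracy estimate is reached.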

    Physical Reasoning for Intelligent Agent in Simulated Environments

    Developing Artificial Intelligence (AI) that is capable of understanding and interacting with the real world in a sophisticated way has long been a grand vision of AI. An increasing number of AI agents are entering our daily lives, assisting us with tasks ranging from house cleaning to serving food in restaurants. While different tasks have different goals, their domains all obey the physical rules (classical Newtonian physics) of the real world. To successfully interact with the physical world, an agent needs to be able to understand its surrounding environment, to predict the consequences of its actions, and to devise plans that achieve a goal without causing any unintended outcomes. Much of AI research over the past decades has been dedicated to specific sub-problems such as machine learning and computer vision. Simply plugging in techniques from these subfields falls far short of creating a comprehensive AI agent that can work well in a physical environment; instead, it requires an integration of methods from different AI areas that takes into account the specific conditions and requirements of the physical environment. In this thesis, we identified several capabilities that are essential for AI to interact with the physical world, namely visual perception, object detection, object tracking, action selection, and structure planning. As the real world is a highly complex environment, we started by developing these capabilities in virtual environments with realistic physics simulations. The central part of our methods is the combination of qualitative reasoning with standard techniques from different AI areas. For the visual perception capability, we developed a method that infers spatial properties of rectangular objects from their minimum bounding rectangles. For the object detection capability, we developed a method that detects unknown objects in a structure by reasoning about the stability of the structure. For the object tracking capability, we developed a method that matches perceptually indistinguishable objects across visual observations made before and after a physical impact; this method identifies the spatial changes of objects during the physical event, and the matching results can be used to learn the consequences of the impact. For the action selection capability, we developed a method that solves a hole-in-one problem, which requires selecting an action out of an infinite number of actions with unknown consequences. For the structure planning capability, we developed a method that arranges objects to form a stable and robust structure by reasoning about structural stability and robustness.
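    The thesis reasons about stability qualitatively; purely as a simplified quantitative stand-in (not the thesis' method), the check below tests whether a vertical stack of axis-aligned rectangles is stable by requiring the centre of mass of each object together with everything above it to lie over the object directly beneath. The Rect type and the uniform-density, equal-height assumptions are introduced here for illustration.

        # Simplified stability check for a vertical stack of axis-aligned rectangles.
        # Illustrative only; not the qualitative-reasoning method described in the thesis.
        from dataclasses import dataclass

        @dataclass
        class Rect:
            x: float       # left edge of the rectangle
            width: float   # horizontal extent (equal heights and uniform density assumed)

        def stack_is_stable(stack: list[Rect]) -> bool:
            """A stack (listed bottom to top) is treated as stable if, for each object
            above the base, the combined centre of mass of that object and everything
            above it lies within the horizontal extent of the object directly beneath."""
            for k in range(1, len(stack)):
                above = stack[k:]
                total_w = sum(r.width for r in above)                        # mass proxy
                com_x = sum((r.x + r.width / 2) * r.width for r in above) / total_w
                support = stack[k - 1]
                if not (support.x <= com_x <= support.x + support.width):
                    return False
            return True

        # Example: a two-block tower offset by 0.8 units still balances.
        print(stack_is_stable([Rect(0.0, 2.0), Rect(0.8, 2.0)]))  # True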