
    Increasing generality in machine learning through procedural content generation

    Procedural Content Generation (PCG) refers to the practice, in videogames and other games, of generating content such as levels, quests, or characters algorithmically. Motivated by the need to make games replayable, as well as to reduce authoring burden, limit storage space requirements, and enable particular aesthetics, a large number of PCG methods have been devised by game developers. Additionally, researchers have explored adapting methods from machine learning, optimization, and constraint solving to PCG problems. Games have been widely used in AI research since the inception of the field, and in recent years have been used to develop and benchmark new machine learning algorithms. Through this practice, it has become more apparent that these algorithms are susceptible to overfitting. Often, an algorithm will not learn a general policy, but instead a policy that only works for a particular version of a particular task with particular initial parameters. In response, researchers have begun exploring randomization of problem parameters to counteract such overfitting and to allow trained policies to transfer more easily from one environment to another, such as from a simulated robot to a robot in the real world. Here we review the large body of existing work on PCG, which we believe has an important role to play in increasing the generality of machine learning methods. The main goal of this review is to present RL and AI practitioners with new tools from the PCG toolbox; a secondary goal is to show game developers and researchers how their work is relevant to AI research.
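
    As a minimal sketch of the parameter-randomization idea the abstract describes (the names LevelParams, sample_level, and make_env are illustrative assumptions, not drawn from the paper), an environment wrapper can resample procedurally generated level parameters on every reset, so a policy is trained across a distribution of tasks rather than a single fixed instance:

    ```python
    import random
    from dataclasses import dataclass

    # Hypothetical level description; the fields are illustrative,
    # not taken from any specific PCG system in the survey.
    @dataclass
    class LevelParams:
        width: int
        num_enemies: int
        gap_probability: float

    def sample_level(rng: random.Random) -> LevelParams:
        """Sample level parameters instead of fixing them, so an agent
        trained across samples cannot overfit a single layout."""
        return LevelParams(
            width=rng.randint(20, 200),
            num_enemies=rng.randint(0, 10),
            gap_probability=rng.uniform(0.0, 0.3),
        )

    class RandomizedEnv:
        """Wrapper that resamples level parameters on every reset,
        mirroring the domain-randomization idea in the abstract."""
        def __init__(self, make_env, seed: int = 0):
            self.make_env = make_env  # factory: LevelParams -> environment
            self.rng = random.Random(seed)
            self.env = None

        def reset(self):
            # A fresh procedurally generated level for each episode.
            self.env = self.make_env(sample_level(self.rng))
            return self.env.reset()

        def step(self, action):
            return self.env.step(action)
    ```

    The same pattern underlies sim-to-real transfer: widening the sampling ranges trades per-task performance for robustness to environments the policy has never seen.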

    Autonomous dishwasher loading from cluttered trays using pre‐trained deep neural networks

    Autonomous dishwasher loading is a benchmark problem in robotics that highlights the challenges of robotic perception, planning, and manipulation in an unstructured environment. Current approaches resort to specialized solutions; however, these technologies are not viable in a domestic setting. Learning-based approaches seem promising for general-purpose solutions, but they require large amounts of curated data to be applied in real-world scenarios. This article presents a novel learning-based solution without a training phase, using pre-trained object detection networks. By developing a perception, planning, and manipulation framework around an off-the-shelf object detection network, we are able to build robust pick-and-place solutions that are easy to develop and general purpose, requiring only RGB feedback and a pinch gripper. An analysis of real-world canteen tray data is first performed and used to develop our in-lab experimental setup. Our results from real-world scenarios indicate that such approaches are highly desirable for plug-and-play domestic applications with limited calibration. All the associated data and code of this work are shared in a public repository.
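
    As a rough sketch of the "no training phase" idea (the paper does not name a specific network, so the torchvision Faster R-CNN model and the pick_points helper below are assumptions for illustration, not the authors' framework), an off-the-shelf pre-trained detector can supply bounding boxes whose centers serve as candidate pick points from RGB input alone:

    ```python
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    # Off-the-shelf detector pre-trained on COCO; stands in here for
    # whichever object detection network the framework is built around.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def pick_points(rgb_image, score_threshold: float = 0.8):
        """Return (x, y) pixel centers of confidently detected objects,
        usable as pick targets for a pinch gripper after a camera-to-robot
        calibration step (not shown)."""
        with torch.no_grad():
            detections = model([to_tensor(rgb_image)])[0]
        points = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score >= score_threshold:
                x0, y0, x1, y1 = box.tolist()
                points.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
        return points
    ```

    Because the detector is used as-is, no task-specific data collection or fine-tuning is needed; robustness then hinges on the confidence threshold and on how pixel coordinates are mapped into the robot's workspace.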