24 research outputs found

    Synthesizing a Progression of Subtasks for Block-Based Visual Programming Tasks

    Full text link
    Block-based visual programming environments play an increasingly important role in introducing computing concepts to K-12 students. In recent years, they have also gained popularity in neuro-symbolic AI, serving as a benchmark to evaluate general problem-solving and logical reasoning skills. The open-ended and conceptual nature of these visual programming tasks makes them challenging, both for state-of-the-art AI agents and for novice programmers. A natural approach to providing assistance for problem-solving is breaking down a complex task into a progression of simpler subtasks; however, this is not trivial, given that the solution codes are typically nested and have non-linear execution behavior. In this paper, we formalize the problem of synthesizing such a progression for a given reference block-based visual programming task. We propose a novel synthesis algorithm that generates a progression of subtasks that are high-quality and well-spaced in terms of their complexity, such that solving the progression leads to solving the reference task. We show the utility of our synthesis algorithm in improving the efficacy of AI agents (in this case, neural program synthesizers) for solving tasks in the Karel programming environment. We then conduct a user study to demonstrate that our synthesized progression of subtasks can assist a novice programmer in solving tasks in the Hour of Code: Maze Challenge by Code.org.
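The core idea of a subtask progression can be illustrated with a minimal sketch. This is not the paper's synthesis algorithm; it simply derives subtasks of increasing, roughly evenly spaced complexity by truncating a flat action trace of a reference solution (function and action names are illustrative):

```python
# Illustrative sketch (not the paper's algorithm): derive a progression of
# subtasks from a reference solution by truncating its flat action trace,
# so each subtask requires only a prefix of the full behavior.
def subtask_progression(solution_actions, num_subtasks):
    """Return prefixes of increasing length as simplified subtask specs."""
    n = len(solution_actions)
    # Well-spaced cut points: roughly equal complexity increments.
    cuts = [round(n * (i + 1) / num_subtasks) for i in range(num_subtasks)]
    return [solution_actions[:c] for c in cuts]

progression = subtask_progression(
    ["move", "turnLeft", "move", "move", "turnRight", "move"], 3)
# Each element is a strictly longer prefix; solving the final subtask
# amounts to solving the reference task.
```

A real synthesizer must additionally handle nested loops and conditionals, whose execution traces do not map one-to-one onto code prefixes; that is precisely the non-triviality the abstract points out.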

    Curriculum Learning in Job Shop Scheduling using Reinforcement Learning

    Get PDF
    Solving job shop scheduling problems (JSSPs) with a fixed strategy, such as a priority dispatching rule, may yield satisfactory results for some problem instances but insufficient results for others. From this single-strategy perspective, finding a near-optimal solution to a specific JSSP varies in difficulty even if the machine setup remains the same. A recently and intensively researched, promising method for dealing with this variability in difficulty is Deep Reinforcement Learning (DRL), which dynamically adjusts an agent's planning strategy in response to difficult instances, not only during training but also when applied to new situations. In this paper, we further improve DRL as an underlying method by actively incorporating the variability of difficulty within the same problem size into the design of the learning process. We base our approach on a state-of-the-art methodology that solves JSSPs by means of DRL and graph neural network embeddings. Our work supplements the training routine of the agent with a curriculum learning strategy that ranks the problem instances shown during training by a new metric of problem instance difficulty. Our results show that certain curricula lead to significantly better performance of the DRL solutions. Agents trained on these curricula beat the top performance of those trained on randomly distributed training data, reaching 3.2% shorter average makespans.
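The curriculum idea described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the difficulty metric here is a hypothetical proxy (gap between a dispatching-rule makespan and a lower bound), and all field names are assumptions:

```python
# Minimal curriculum-learning sketch (illustrative, not the paper's code):
# rank JSSP training instances by an assumed difficulty metric and present
# them easiest-first during training.
def make_curriculum(instances, difficulty):
    """Sort problem instances by an externally supplied difficulty score."""
    return sorted(instances, key=difficulty)

# Hypothetical difficulty proxy: gap between a dispatching-rule makespan
# and a lower bound -- harder instances leave more room to optimize.
def difficulty(instance):
    return instance["heuristic_makespan"] - instance["lower_bound"]

instances = [
    {"id": 0, "heuristic_makespan": 120, "lower_bound": 100},
    {"id": 1, "heuristic_makespan": 105, "lower_bound": 100},
    {"id": 2, "heuristic_makespan": 150, "lower_bound": 100},
]
curriculum = make_curriculum(instances, difficulty)
# ids ordered easiest-first: [1, 0, 2]
```

The interesting design question the paper studies is which ordering (which curriculum) actually helps: a good difficulty metric and schedule can beat uniformly random instance sampling.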

    On The Effectiveness Of Bottleneck Information For Solving Job Shop Scheduling Problems Using Deep Reinforcement Learning

    Get PDF
    Job shop scheduling problems (JSSPs) have been the subject of intense study for decades because they are often at the core of significant industrial planning challenges and have a high optimization potential. As a result, the scientific community has developed clever heuristics to approximate optimal solutions. A prominent example is the shifting bottleneck heuristic, which iteratively identifies bottlenecks in the current schedule and uses this information to apply targeted optimization steps. In recent years, deep reinforcement learning (DRL) has gained increasing attention for solving scheduling problems in job shops and beyond. One design decision when applying DRL to JSSPs is the observation, i.e., the descriptive representation of the current problem and solution state. Interestingly, when designing the observation, DRL solutions do not make use of the explicit notions of bottlenecks developed in the past. In this paper, we investigate ways to leverage a definition of bottlenecks inspired by the shifting bottleneck heuristic for solving JSSPs with DRL, in order to increase the effectiveness and efficiency of model training. To this end, we train two different DRL base models with and without bottleneck features. However, our results indicate that previously developed bottleneck definitions neither increase training efficiency nor final model performance.
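The design decision discussed above, augmenting the observation with an explicit bottleneck feature, can be sketched as follows. This is a simplified illustration under assumed names, not the paper's observation design: the bottleneck is flagged as the machine with the maximum remaining load, loosely echoing the shifting bottleneck idea:

```python
# Hedged sketch of augmenting a per-machine observation with a bottleneck
# indicator inspired by the shifting bottleneck heuristic. The feature
# layout and the load-based bottleneck criterion are assumptions.
def observation_with_bottleneck(machine_loads):
    """Append a one-hot flag marking the machine with max remaining load."""
    bottleneck = machine_loads.index(max(machine_loads))
    flags = [1.0 if i == bottleneck else 0.0
             for i in range(len(machine_loads))]
    # Observation = raw load features plus the explicit bottleneck feature.
    return machine_loads + flags

obs = observation_with_bottleneck([3.0, 7.0, 5.0])
# -> [3.0, 7.0, 5.0, 0.0, 1.0, 0.0]
```

Whether such an explicit feature helps is exactly the empirical question of the paper, and its finding is negative: the tested bottleneck definitions improved neither training efficiency nor final performance.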

    Evaluation of the “Polatlı Müzakereci-Arabulucu-Lider Öğrenci Yetişiyor” (Polatlı Raises Negotiator-Mediator-Leader Students) Project

    Get PDF
    The aim of this study is to examine the effectiveness of the “Polatlı Müzakereci Arabulucu Lider Öğrenci Yetişiyor” project, which seeks to identify the conflicts experienced among young people at high schools in the Polatlı district of Ankara within the framework of changing world values, and to manage and transform them through innovative and peaceful practices. Over the course of the project, the “Conflict Resolution, Negotiation, and Peer Mediation Training Program” was delivered to a total of 829 students (394 girls and 435 boys) from 10 high schools in Polatlı. The study employed a qualitative research method with three data collection instruments: Process Evaluation Interview Questions, the Peer Mediation Form, and the Program Evaluation Questionnaire. The findings showed that the mediation training contributed positively to the students’ problem-solving skills, friendships, socialization, and self-confidence, and led to a decrease in disciplinary incidents at school. Furthermore, 96.5% of the disputes handled in mediation meetings ended in reconciliation and agreement.

    Artificial Intelligence in Automotive Production

    No full text
    Deep Learning (DL), Artificial Intelligence (AI), Machine Learning (ML): three terms, often used synonymously, that stand for a new kind of intelligent system. Companies worldwide invest financial and human resources to tap the potential and promises of these technologies for themselves, be it by establishing data science departments or powerful computer clusters. The automotive industry is no exception, with a prominent media focus on “autonomous driving”. However, this is not the only application area for Artificial Intelligence in the automotive domain. The use of machine learning is also researched and applied in automotive production plants: from the body shop all the way to predictive estimation of what proportion of a component is damaged. In this contribution, we discuss the use of Artificial Intelligence in practical examples of automotive production, point out which challenges exist and which approaches are promising, and evaluate the resulting potentials.

    Industrial Transfer Learning: Boosting Machine Learning in Production

    No full text

    schlably: A Python framework for deep reinforcement learning based scheduling experiments

    No full text
    Research on deep reinforcement learning (DRL) based production scheduling (PS) has gained a lot of attention in recent years, primarily due to the high demand for optimizing scheduling problems in diverse industry settings. Numerous studies are carried out and published as stand-alone experiments that often vary only slightly with respect to problem setups and solution approaches. The programmatic core of these experiments is typically very similar. Despite this fact, no standardized and resilient framework for experimentation on PS problems with DRL algorithms has been established so far. In this paper, we introduce schlably, a Python-based framework that provides researchers with a comprehensive toolset to facilitate the development of PS solution strategies based on DRL. schlably eliminates the redundant overhead work that the creation of a sturdy and flexible backbone requires, and increases the comparability and reusability of conducted research work.