
    CLR-DRNets: Curriculum Learning with Restarts to Solve Visual Combinatorial Games

    We introduce a curriculum learning framework for challenging tasks that require a combination of pattern recognition and combinatorial reasoning, such as single-player visual combinatorial games. Our work harnesses Deep Reasoning Nets (DRNets) [Chen et al., 2020], a framework that combines deep learning with constraint reasoning for unsupervised pattern demixing. We propose CLR-DRNets (pronounced Clear-DRNets), a curriculum-learning-with-restarts framework that boosts the performance of DRNets. CLR-DRNets incrementally increase the difficulty of the training instances and use restarts, a new model selection method that selects multiple models from the same training trajectory, to learn a set of diverse heuristics and apply them at inference time. We also propose an enhanced reasoning module for the DRNets framework that encodes these visual games, improves reasoning ability, and generalizes to unseen instances. We consider Visual Sudoku, i.e., Sudoku with hand-written digits or letters, and Visual Mixed Sudoku, a substantially more challenging task that requires the demixing and completion of two overlapping Visual Sudokus. We show that CLR-DRNets considerably outperform DRNets and other approaches on these visual combinatorial games.
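    The curriculum-with-restarts idea described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `ToyModel`, `train_clr`, and `infer` are invented names, and the "model" merely counts training steps to mimic snapshots of increasing competence taken along one training trajectory.

    ```python
    class ToyModel:
        """Stand-in for a DRNet; it just records how long it has trained."""
        def __init__(self, steps=0):
            self.steps = steps

        def update(self, batch):
            self.steps += 1            # one training step (stub)

        def copy(self):
            return ToyModel(self.steps)

        def solve(self, difficulty):
            # Pretend harder puzzles need more training to solve.
            return f"solved@{self.steps}" if self.steps >= difficulty else None

    def train_clr(model, stages, snapshots_per_stage=2):
        """Train through a difficulty curriculum (easy -> hard stages),
        saving model snapshots ("restarts") along a single trajectory."""
        snapshots = []
        for instances in stages:
            every = max(len(instances) // snapshots_per_stage, 1)
            for step, batch in enumerate(instances, 1):
                model.update(batch)
                if step % every == 0:
                    snapshots.append(model.copy())   # keep a diverse heuristic
        return snapshots

    def infer(snapshots, difficulty):
        """At inference time, try each saved snapshot until one succeeds."""
        for m in snapshots:
            out = m.solve(difficulty)
            if out is not None:
                return out
        return None
    ```

    The point of keeping several snapshots rather than only the final model is that earlier and later checkpoints can embody different heuristics, and trying them in turn at inference time increases the chance that one solves a given instance.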

    A Study of AI Population Dynamics with Million-agent Reinforcement Learning

    We conduct an empirical study of the ordered collective dynamics exhibited by a population of intelligent agents driven by million-agent reinforcement learning. Our intention is to put intelligent agents into a simulated natural context and verify whether principles developed in the real world can also be used to understand an artificially created intelligent population. To achieve this, we simulate a large-scale predator-prey world whose laws are designed using only findings, or their logical equivalents, that have been discovered in nature. We endow the agents with intelligence based on deep reinforcement learning (DRL). To scale the population up to millions of agents, we propose a large-scale DRL training platform with a redesigned experience buffer. Our results show that the population dynamics of AI agents, driven only by each agent's individual self-interest, reveal an ordered pattern similar to the Lotka-Volterra model studied in population biology. We further discover emergent behaviors of collective adaptation by studying how the agents' grouping behaviors change with environmental resources. Both findings can be explained by the theory of self-organization in nature. Comment: Full version of the paper presented at AAMAS 2018 (International Conference on Autonomous Agents and Multiagent Systems).
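    The Lotka-Volterra model that the observed population cycles resemble is the classical pair of ODEs dx/dt = αx − βxy, dy/dt = δxy − γy for prey x and predators y. A minimal forward-Euler simulation is sketched below; the parameter values and starting populations are illustrative only, not taken from the paper.

    ```python
    def lotka_volterra(prey, pred, alpha=1.1, beta=0.4, delta=0.1,
                       gamma=0.4, dt=0.001, steps=20000):
        """Integrate dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y
        with forward Euler; returns the (prey, predator) trajectory."""
        history = [(prey, pred)]
        for _ in range(steps):
            dprey = (alpha * prey - beta * prey * pred) * dt
            dpred = (delta * prey * pred - gamma * pred) * dt
            prey, pred = prey + dprey, pred + dpred
            history.append((prey, pred))
        return history

    # Populations oscillate rather than converge: a prey crash is followed
    # by a predator decline, which lets the prey recover, and so on.
    traj = lotka_volterra(10.0, 10.0)
    ```

    The signature of this model in the million-agent experiments is exactly this phase-shifted cycling of the two populations, emerging from individual self-interest rather than from explicitly coded dynamics.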

    Patterning of ventral telencephalon requires positive function of Gli transcription factors

    The ability of neuroepithelial cells to generate a diverse array of neurons is influenced by locally secreted signals. In the spinal cord, Sonic Hedgehog (Shh) is known to induce distinct cell fates in a concentration-dependent manner by regulating the activities of the three Gli transcription factors in neural precursors. However, whether Gli-mediated Shh signaling is also required to induce different cell types in the ventral telencephalon has been controversial. In particular, loss of Shh has little effect on dorsoventral patterning of the telencephalon when Gli3 is also removed. Furthermore, no ventral telencephalic phenotypes have been found in individual Gli mutants. To address this issue, we first characterized Shh-responding ventral telencephalic progenitors between E9.5 and E12.5 and found that they produce neurons migrating to different layers of the cortex. We also discovered a loss of Nkx2.1 and Nkx6.2 expression in two subgroups of progenitors in embryos lacking major Gli activators. Finally, we analyzed the telencephalic phenotypes of embryos lacking all Gli genes and found that the ventral telencephalon was highly disorganized, with intermingling of distinct neuronal cell types. Together, these studies unravel a role for Gli transcription factors in mediating Shh signaling to control the specification, differentiation and positioning of ventral telencephalic neurons.

    Recombinant amelogenin peptide TRAP promoting remineralization of early enamel caries: An in vitro study

    Objective: To explore the regulatory effect of the recombinant amelogenin peptide TRAP on the remineralization of early enamel carious lesions. Methods: Forty-eight bovine enamel blocks in which initial lesions had been prepared in vitro were split at random into four groups for a 12-day immersion treatment: 1) remineralizing medium; 2) studied peptide 1 (consisting of the N- and C-termini of porcine amelogenin) + remineralizing medium; 3) studied peptide 2 (TRAP) + remineralizing medium; 4) fluoride + remineralizing medium. After demineralization and remineralization immersion, each specimen's mean mineral loss and lesion depth were measured using micro-computed tomography (micro-CT), and the change in lesion depth (∆LD) and mineral gain (∆Z) were computed following remineralization. The enamel samples were then sectioned and examined with polarized light microscopy (PLM), cross-sectional morphology was observed by scanning electron microscopy (SEM), the crystal phase was analyzed by X-ray micro-diffraction (XRD), and calcium-binding properties were determined using isothermal titration calorimetry (ITC). Results: Micro-CT analysis revealed a significant reduction in mineral loss in all four groups following the remineralization treatment (p < 0.05). Treatment with fluoride produced the greatest ∆Z and ∆LD, whereas the remineralizing medium alone produced the least among all groups. The ∆Z and ∆LD of the studied peptide 1 and studied peptide 2 groups were greater than those of the remineralizing medium group, but there was no significant difference between the two peptide groups (p > 0.05). PLM showed a thickening of the surface layer in all samples and a change in the negatively birefringent band in the body of the lesion. SEM showed that minerals had formed in all four groups of samples. XRD indicated that the remineralization products of the studied peptides were hydroxyapatite (HA) crystals. ITC showed two binding modes between calcium and the TRAP peptide. Conclusion: This study confirms the potential of the recombinant amelogenin peptide TRAP, a key functional motif of the amelogenin protein, for enamel remineralization, and provides a promising biomaterial for the treatment of initial enamel carious lesions.
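    The two micro-CT endpoints above are simple differences between the demineralized baseline scan and the post-treatment scan: mineral gain ∆Z and lesion-depth change ∆LD, each averaged within a treatment group. A toy sketch of that bookkeeping follows; the function names and all numbers are invented for illustration and are not the study's data.

    ```python
    def deltas(baseline, treated):
        """baseline/treated: (mineral_loss, lesion_depth) from micro-CT.
        Returns (dZ, dLD); positive values indicate remineralization."""
        dZ = baseline[0] - treated[0]     # mineral gain
        dLD = baseline[1] - treated[1]    # lesion became shallower
        return dZ, dLD

    def group_mean_deltas(pairs):
        """Average dZ and dLD over a treatment group's specimens."""
        ds = [deltas(b, t) for b, t in pairs]
        n = len(ds)
        return (sum(d[0] for d in ds) / n, sum(d[1] for d in ds) / n)
    ```

    Comparing these group means (e.g., fluoride vs. peptide vs. medium alone) is what the reported ∆Z/∆LD rankings amount to.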

    LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting

    Road traffic forecasting plays a critical role in smart city initiatives and has experienced significant advancements thanks to the power of deep learning in capturing non-linear patterns in traffic data. However, the promising results achieved on current public datasets may not be applicable to practical scenarios due to limitations within these datasets. First, their limited sizes may not reflect the real-world scale of traffic networks. Second, the temporal coverage of these datasets is typically short, posing hurdles to studying long-term patterns and acquiring sufficient samples for training deep models. Third, these datasets often lack adequate sensor metadata, which compromises the reliability and interpretability of the data. To mitigate these limitations, we introduce the LargeST benchmark dataset. It encompasses a total of 8,600 sensors in California with 5 years of temporal coverage and includes comprehensive metadata. Using LargeST, we perform in-depth data analysis to extract data insights, benchmark well-known baselines in terms of their performance and efficiency, and identify challenges as well as opportunities for future research. We release the datasets and baseline implementations at: https://github.com/liuxu77/LargeST
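    Benchmarking forecasting baselines of the kind described above is typically done with MAE and RMSE over the predicted horizon. A dependency-free sketch of these standard metrics is below; the repository's own evaluation code may differ in details (e.g., masking of missing sensor readings).

    ```python
    import math

    def mae(y_true, y_pred):
        """Mean absolute error over paired observations."""
        return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

    def rmse(y_true, y_pred):
        """Root mean squared error; penalizes large errors more than MAE."""
        return math.sqrt(
            sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)
        )
    ```

    Because RMSE squares the residuals, a baseline can rank well on MAE but poorly on RMSE if it makes occasional large errors, which is why traffic benchmarks usually report both.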