
    VGM-RNN: Recurrent Neural Networks for Video Game Music Generation

    The recent explosion of interest in deep neural networks has affected, and in some cases reinvigorated, work in fields as diverse as natural language processing, image recognition, and speech recognition. For sequence learning tasks, recurrent neural networks, and in particular LSTM-based networks, have shown promising results. Recently there has been interest, for example in the research by Google's Magenta team, in applying so-called "language modeling" recurrent neural networks to musical tasks, including the automatic generation of original music. In this work we demonstrate our own LSTM-based music language model. We show that it is able to learn musical features from a MIDI dataset and generate output that is musically interesting while exhibiting features of melody, harmony, and rhythm. We source our dataset from VGMusic.com, a collection of user-submitted MIDI transcriptions of video game songs, and attempt to generate output which emulates this kind of music.
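
    To make the modeling approach concrete, the sketch below shows a minimal next-event LSTM language model of the kind this abstract describes, written in PyTorch. The 128-event vocabulary, layer sizes, and temperature-based sampling loop are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MusicLSTM(nn.Module):
    """Next-event language model over tokenized MIDI sequences.

    Illustrative sketch: the 128-token vocabulary and layer sizes are
    assumptions, not the configuration reported in the paper.
    """
    def __init__(self, vocab_size=128, embed_size=64, hidden_size=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)            # (batch, time, embed)
        out, state = self.lstm(x, state)  # (batch, time, hidden)
        return self.head(out), state      # logits over the next event

def sample(model, seed, steps=100, temperature=1.0):
    """Autoregressively generate `steps` events after a seed sequence."""
    model.eval()
    tokens, state, generated = seed, None, []
    with torch.no_grad():
        for _ in range(steps):
            logits, state = model(tokens, state)
            probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, num_samples=1)  # (batch, 1)
            generated.append(nxt)
            tokens = nxt  # feed the sampled event back in; the LSTM state carries history
    return torch.cat(generated, dim=1)
```

    Generation proceeds autoregressively: each sampled event is fed back as the next input, with the recurrent state preserving the melodic and rhythmic context accumulated so far.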

    A Cross-Repository Model for Predicting Popularity in GitHub

    Social coding platforms, such as GitHub, can serve as natural laboratories for studying the diffusion of innovation by tracking the pattern of code adoption by programmers. This paper focuses on the problem of predicting the popularity of software repositories over time; our aim is to forecast the time series of popularity-related events (code forks and watches). In particular, we are interested in cross-repository patterns: how do events on one repository affect other repositories? Our proposed LSTM (Long Short-Term Memory) recurrent neural network integrates events across multiple active repositories, outperforming a standard ARIMA (Auto-Regressive Integrated Moving Average) time series prediction based on a single repository. The ability of the LSTM to leverage cross-repository information gives it a significant edge over standard time series forecasting.
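
    A minimal version of this cross-repository idea can be sketched as an LSTM whose input at each time step stacks the event counts of all tracked repositories. The per-step feature layout and sizes below are assumptions rather than the paper's published configuration; the single-repository ARIMA baseline it is compared against can be fit separately with statsmodels, e.g. ARIMA(series, order=(p, d, q)).

```python
import torch
import torch.nn as nn

class CrossRepoLSTM(nn.Module):
    """Forecast the next (forks, watches) counts for a target repository
    from the joint event history of several repositories.

    Sketch only: the feature layout (two counts per repository per step)
    and the layer sizes are assumptions, not the paper's configuration.
    """
    def __init__(self, num_repos, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(2 * num_repos, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # next-step (forks, watches) for the target

    def forward(self, history):
        # history: (batch, time, 2 * num_repos), event counts for all repositories
        out, _ = self.lstm(history)
        return self.head(out[:, -1])  # predict from the final hidden state
```

    Trained with a regression loss such as nn.MSELoss(), the model conditions on every repository's history at once, which is exactly the cross-repository signal a single-series ARIMA baseline cannot use.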

    A Neural Model for Generating Natural Language Summaries of Program Subroutines

    Source code summarization, creating natural language descriptions of source code behavior, is a rapidly growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems. But nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independently of the text in code. This helps our approach provide coherent summaries in many cases even when no internal documentation is available. We evaluate our technique with a dataset we created from 2.1M Java methods. We find improvement over two baseline techniques from the software engineering literature and one from the natural language processing literature.
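
    The two-input design can be illustrated with a simplified encoder-decoder sketch: code tokens and a linearized AST are embedded and encoded separately, and the decoder attends over both encodings when emitting summary words. The layer choices, attention scheme, and AST linearization here are assumptions; the paper's actual model may differ.

```python
import torch
import torch.nn as nn

class DualEncoderSummarizer(nn.Module):
    """Summarize a subroutine from two separate inputs: its code tokens
    and a linearized traversal of its AST.

    Simplified sketch of the two-input idea; sizes and the attention
    scheme are assumptions, not the paper's published architecture.
    """
    def __init__(self, code_vocab, ast_vocab, text_vocab, dim=256):
        super().__init__()
        self.code_embed = nn.Embedding(code_vocab, dim)
        self.ast_embed = nn.Embedding(ast_vocab, dim)
        self.text_embed = nn.Embedding(text_vocab, dim)
        self.code_enc = nn.GRU(dim, dim, batch_first=True)
        self.ast_enc = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * dim, text_vocab)

    def forward(self, code_tokens, ast_tokens, summary_tokens):
        # Encode each source separately so structure is learned
        # independently of identifier text.
        code_h, _ = self.code_enc(self.code_embed(code_tokens))
        ast_h, _ = self.ast_enc(self.ast_embed(ast_tokens))
        memory = torch.cat([code_h, ast_h], dim=1)  # both sources available to attention
        # summary_tokens is the shifted target sequence (teacher forcing).
        dec_h, _ = self.decoder(self.text_embed(summary_tokens))
        ctx, _ = self.attn(dec_h, memory, memory)   # attend over code and AST states
        return self.out(torch.cat([dec_h, ctx], dim=-1))  # next-word logits
```

    Keeping the two encoders separate is what lets such a model fall back on structural patterns from the AST when identifier names carry little information, which is the behavior the abstract highlights.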