
    Question Oriented Software Text Retrieval

Dataset-1: Question-answer pairs on “Lucene” collected from StackOverflow. As described in paper [36], we first collected 5,587 questions and 7,872 answers tagged “lucene” from StackOverflow, of which the 1,826 questions with positive votes were kept and labeled. We use these questions and their 2,460 answers for the original classifier training and testing.

Dataset-2: Question-answer pairs on “Java” collected from StackOverflow. Because we need more data to train the classifier models and evaluate our approach, we extended our data collection scope and randomly picked 50,000 questions tagged “Java” on StackOverflow. Judging the types of all these questions accurately and manually would be too time-consuming, so we filter the questions using regular expressions (e.g., questions containing phrases such as “how to”, “how can”, or “what is the best way to” are labeled with the “how to” tag); a sketch of this filtering step appears after the dataset descriptions. In the end, 11,003 questions and their corresponding 16,255 answers were selected. Table IV briefly describes these two datasets.

Dataset-3: FAQs of seven well-known open source projects. In software development, FAQs are used by many projects as part of their documentation. Compared with the data from StackOverflow, FAQs are more formal and accurate. We want to investigate whether our approach is valid for searching these questions’ answers and whether the classifiers are affected by our learning examples. Table V lists the seven open source projects and the number of their FAQs. All of them are top-level projects (TLPs) of Apache.
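To make the regular-expression filtering step concrete, the Python sketch below shows one way it could work. The patterns only mirror the phrases quoted above for the “how to” tag, and the function and variable names are assumptions for illustration; the excerpt does not give the authors’ full pattern set or implementation.

```python
import re

# Hypothetical pattern table; the paper only quotes the "how to" examples,
# so other question-type tags and patterns are not reproduced here.
QUESTION_TYPE_PATTERNS = {
    "how to": re.compile(r"\b(how to|how can|what is the best way to)\b",
                         re.IGNORECASE),
}

def label_question(title):
    """Return the first matching question-type tag, or None if none match."""
    for tag, pattern in QUESTION_TYPE_PATTERNS.items():
        if pattern.search(title):
            return tag
    return None

# Filtering: keep only questions whose title matches some pattern,
# discarding the rest (as in the 50,000 -> 11,003 selection step).
questions = [
    "How to boost a field score in Lucene?",
    "NullPointerException in my Java servlet",
    "What is the best way to index PDF files?",
]
selected = [(q, label_question(q)) for q in questions
            if label_question(q) is not None]
print(selected)
```

Matching on question titles here is an assumption; the same check could be applied to question bodies, at the cost of more false positives from incidental phrases in prose.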