47,758 research outputs found

    Rationale in Development Chat Messages: An Exploratory Study

    Full text link
    Chat messages of development teams play an increasingly significant role in software development, having replaced emails in some cases. Chat messages contain information about discussed issues, considered alternatives and argumentation leading to the decisions made during software development. These elements, defined as rationale, are invaluable during software evolution for documenting and reusing development knowledge. Rationale is also essential for coping with changes and for effective maintenance of the software system. However, exploiting the rationale hidden in the chat messages is challenging due to the high volume of unstructured messages covering a wide range of topics. This work presents the results of an exploratory study examining the frequency of rationale in chat messages, the completeness of the available rationale and the potential of automatic techniques for rationale extraction. For this purpose, we apply content analysis and machine learning techniques on more than 8,700 chat messages from three software development projects. Our results show that chat messages are a rich source of rationale and that machine learning is a promising technique for detecting rationale and identifying different rationale elements. Comment: 11 pages, 6 figures. The 14th International Conference on Mining Software Repositories (MSR'17)
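
    The abstract does not spell out the learning setup; the sketch below only illustrates how rationale detection in chat messages could be framed as supervised text classification. The toy messages, labels, and the TF-IDF plus logistic-regression model are assumptions for illustration, not the techniques reported in the study.

        # Hypothetical sketch: rationale detection as binary text classification.
        # Messages, labels, and model choice are illustrative assumptions.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy chat messages labelled 1 if they carry rationale
        # (issue, alternative, or argument), else 0.
        messages = [
            "We should switch to PostgreSQL because SQLite locks the whole DB on writes.",
            "I considered Redis too, but persistence guarantees were the deciding factor.",
            "Lunch at noon?",
            "Pushed the fix, CI is green.",
        ]
        labels = [1, 1, 0, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(messages, labels)

        print(clf.predict(["Maybe we should use a queue instead, retries would be simpler."]))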

    Synthesizing diverse evidence: the use of primary qualitative data analysis methods and logic models in public health reviews

    Get PDF
    Objectives: The nature of public health evidence presents challenges for conventional systematic review processes, with increasing recognition of the need to include a broader range of work including observational studies and qualitative research, yet with methods to combine diverse sources remaining underdeveloped. The objective of this paper is to report the application of a new approach for review of evidence in the public health sphere. The method enables a diverse range of evidence types to be synthesized in order to examine potential relationships between a public health environment and outcomes. Study design: The study drew on previous work by the National Institute for Health and Clinical Excellence on conceptual frameworks. It applied and further extended this work to the synthesis of evidence relating to one particular public health area: the enhancement of employee mental well-being in the workplace. Methods: The approach utilized thematic analysis techniques from primary research, together with conceptual modelling, to explore potential relationships between factors and outcomes. Results: The method enabled a logic framework to be built from a diverse document set that illustrates how elements and associations between elements may impact on the well-being of employees. Conclusions: Whilst recognizing potential criticisms of the approach, it is suggested that logic models can be a useful way of examining the complexity of relationships between factors and outcomes in public health, and of highlighting potential areas for interventions and further research. The use of techniques from primary qualitative research may also be helpful in synthesizing diverse document types. (C) 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved

    PDF-Malware Detection: A Survey and Taxonomy of Current Techniques

    Get PDF
    Portable Document Format, more commonly known as PDF, has become, over the last 20 years, a standard for document exchange and dissemination due to its portable nature and widespread adoption. The flexibility and power of this format are leveraged not only by benign users but also by hackers, who have worked to exploit various types of vulnerabilities, overcome security restrictions, and turn PDF into one of the leading vectors for spreading malicious code. Analyzing the content of malicious PDF files to extract the main features that characterize the malware's identity and behavior is a fundamental task for modern threat intelligence platforms that need to learn how to automatically identify new attacks. This paper surveys the existing state of the art in systems for the detection of malicious PDF files and organizes them in a taxonomy that separately considers the approaches used and the data analyzed to detect the presence of malicious code. © Springer International Publishing AG, part of Springer Nature 2018
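
    As a rough illustration of the feature-extraction step such detectors rely on, the sketch below counts structural keywords that are often treated as indicators of active content in a PDF. The keyword list and the synthetic byte fragment are assumptions for the example; the systems surveyed in the paper use richer features.

        # Illustrative sketch (not from the survey): naive counts of structural
        # keywords commonly used as features by PDF-malware detectors.
        from collections import Counter

        # The exact keyword list is an illustrative assumption.
        KEYWORDS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch",
                    b"/EmbeddedFile", b"/ObjStm"]

        def keyword_features(pdf_bytes: bytes) -> Counter:
            """Naive substring counts of suspicious keywords in the raw PDF bytes."""
            return Counter({kw.decode(): pdf_bytes.count(kw) for kw in KEYWORDS})

        # Tiny synthetic fragment standing in for a real file read with Path(...).read_bytes().
        sample = b"%PDF-1.7 1 0 obj << /OpenAction << /S /JavaScript /JS (eval(...)) >> >> endobj"
        print(keyword_features(sample))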

    United we fall, divided we stand: A study of query segmentation and PRF for patent prior art search

    Get PDF
    Previous research in patent search has shown that reducing queries by extracting a few key terms is ineffective, primarily because of the vocabulary mismatch between patent applications used as queries and existing patent documents. This finding has led to the use of full patent applications as queries in patent prior art search. In addition, standard information retrieval (IR) techniques such as query expansion (QE) do not work effectively with patent queries, principally because of the presence of noise terms in the massive queries. In this study, we take a new approach to QE for patent search. Text segmentation is used to decompose a patent query into self-coherent sub-topic blocks. Each of these much shorter sub-topic blocks, which is representative of a specific aspect or facet of the invention, is then used as a query to retrieve documents. Documents retrieved using the different resulting sub-queries, or query streams, are interleaved to construct a final ranked list. This technique can exploit the potential benefit of QE since the segmented queries are generally more focused and less ambiguous than the full patent query. Experiments on the CLEF-2010 IP prior-art search task show that the proposed method outperforms the retrieval effectiveness achieved when using a single full patent application text as the query, and also demonstrates the potential benefits of QE to alleviate the vocabulary mismatch problem in patent search
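
    The interleaving step can be pictured with a short sketch: rankings retrieved for each sub-topic block are merged round-robin into one final list. The retrieval backend is omitted and the document identifiers are invented; only the merging logic is shown, as one plausible reading of the approach described above.

        # Sketch of interleaving per-sub-query rankings into a single ranked list.
        from itertools import zip_longest

        def interleave(ranked_lists: list[list[str]]) -> list[str]:
            """Round-robin merge of per-sub-query rankings, dropping duplicates."""
            merged, seen = [], set()
            for tier in zip_longest(*ranked_lists):
                for doc_id in tier:
                    if doc_id is not None and doc_id not in seen:
                        seen.add(doc_id)
                        merged.append(doc_id)
            return merged

        # Toy rankings from three hypothetical sub-query streams.
        streams = [["d3", "d7", "d1"], ["d7", "d2"], ["d5", "d3", "d9"]]
        print(interleave(streams))  # -> ['d3', 'd7', 'd5', 'd2', 'd1', 'd9']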

    Integrative Use of Information Extraction, Semantic Matchmaking and Adaptive Coupling Techniques in Support of Distributed Information Processing and Decision-Making

    No full text
    In order to extract maximal cognitive benefit from their social, technological and informational environments, military coalitions need to understand how best to exploit available information assets as well as how best to organize their socially-distributed information processing activities. The International Technology Alliance (ITA) program is beginning to address the challenges associated with enhanced cognition in military coalition environments by integrating a variety of research and development efforts. In particular, research in one component of the ITA ('Project 4: Shared Understanding and Information Exploitation') is seeking to develop capabilities that enable military coalitions to better exploit and distribute networked information assets in the service of collective cognitive outcomes (e.g. improved decision-making). In this paper, we provide an overview of the various research activities in Project 4. We also show how these research activities complement one another in terms of supporting coalition-based collective cognition

    RAMESES publication standards: meta-narrative reviews

    Get PDF
    PMCID: PMC3558334. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited

    The DIGMAP geo-temporal web gazetteer service

    Get PDF
    This paper presents the DIGMAP geo-temporal Web gazetteer service, a system providing access to names of places, historical periods, and associated geo-temporal information. Within the DIGMAP project, this gazetteer serves as the unified repository of geographic and temporal information, assisting in the recognition and disambiguation of geo-temporal expressions over text, as well as in resource searching and indexing. We describe the data integration methodology, the handling of temporal information and some of the applications that use the gazetteer. Initial evaluation results show that the proposed system can adequately support several tasks related to geo-temporal information extraction and retrieval
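
    A minimal sketch of the kind of lookup such a gazetteer exposes is given below. The place and period records are invented placeholders, not DIGMAP data or its API; the real service resolves far richer geo-temporal structures.

        # Illustrative gazetteer lookup: place names and historical periods map to
        # geo-temporal records. All records below are hypothetical examples.
        PLACES = {
            "lisboa": {"preferred": "Lisbon", "lat": 38.7223, "lon": -9.1393},
            "lisbon": {"preferred": "Lisbon", "lat": 38.7223, "lon": -9.1393},
        }
        PERIODS = {
            "age of discoveries": (1415, 1543),  # rough, illustrative date range
        }

        def resolve(expression: str):
            """Resolve a place name or period mention to its gazetteer record, if any."""
            key = expression.strip().lower()
            return PLACES.get(key) or PERIODS.get(key)

        print(resolve("Lisboa"))
        print(resolve("Age of Discoveries"))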

    BlogForever D2.6: Data Extraction Methodology

    Get PDF
    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform
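
    One way to picture the RSS-anchored extraction idea is sketched below: the summary from a post's RSS entry is matched against the block-level elements of the post's HTML to locate the content region. The feed text, HTML snippet, and similarity measure are assumptions for illustration, not the BlogForever implementation.

        # Illustrative sketch: use an RSS summary to find the matching content block in HTML.
        import difflib
        from html.parser import HTMLParser

        class BlockCollector(HTMLParser):
            """Collect the text of block-level elements (here: <p> and <div>)."""
            def __init__(self):
                super().__init__()
                self.blocks, self._buf, self._depth = [], [], 0
            def handle_starttag(self, tag, attrs):
                if tag in ("p", "div"):
                    self._depth += 1
            def handle_endtag(self, tag):
                if tag in ("p", "div") and self._depth:
                    self._depth -= 1
                    if self._buf:
                        self.blocks.append(" ".join(self._buf))
                        self._buf = []
            def handle_data(self, data):
                if self._depth and data.strip():
                    self._buf.append(data.strip())

        def best_block(rss_summary: str, html: str) -> str:
            """Return the HTML block whose text is most similar to the RSS summary."""
            collector = BlockCollector()
            collector.feed(html)
            return max(collector.blocks,
                       key=lambda b: difflib.SequenceMatcher(None, rss_summary, b).ratio())

        # Hypothetical feed summary and post HTML.
        rss_summary = "We released version 2.0 of the tool with a new parser."
        html = ("<div class='nav'>Home About</div>"
                "<p>We released version 2.0 of the tool with a new parser and faster indexing.</p>")
        print(best_block(rss_summary, html))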