222 research outputs found

    DECODER - DEveloper COmpanion for Documented and annotatEd code Reference

    Full text link
    This work has been developed with the financial support of the European Union's Horizon 2020 research and innovation programme under grant agreement No. 824231.
    Gil Pascual, M.; Pastor-Ricós, F.; Torres Bosch, MV.; Vos, TE. (2020). DECODER - DEveloper COmpanion for Documented and annotatEd code Reference. Springer. 643-644. http://hdl.handle.net/10251/178910

    DECODER - DEveloper COmpanion for Documented and annotatEd code Reference

    Full text link
    Software is everywhere, and the productivity of software engineers has increased radically with the advent of new specification, design and programming paradigms and languages. The main objective of the DECODER project is to introduce radical solutions that increase productivity by raising the abstraction level at the specification stage, using requirements engineering techniques to integrate more complete specifications into the development process and formal methods to reduce the time and effort needed for integration testing. The DECODER project will develop a methodology and tools to improve the productivity of the software development process for medium-criticality applications in the domains of IoT, Cloud Computing, and Operating Systems by combining Natural Language Processing techniques, modelling techniques and Formal Methods. A radical improvement is expected from the management and transformation of informal data into material (herein called knowledge) that can be assimilated by any party involved in a development process. The project expects an average benefit of 20% in terms of effort on several use cases belonging to the aforementioned domains and will provide recommendations on how to generalize the approach to other medium-critical domains.
    This work has been developed with the financial support of the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 824231 and the Spanish State Research Agency under the project TIN2017-84094-R, co-financed with ERDF.
    Torres Bosch, MV.; Gil Pascual, M.; Pelechano Ferragud, V. (2019). DECODER - DEveloper COmpanion for Documented and annotatEd code Reference. Springer. 596-601. https://doi.org/10.1007/978-3-030-35333-9_44

    Multi-Document Summarisation from Heterogeneous Software Development Artefacts

    Get PDF
    Software engineers create a vast number of artefacts during project development activities, consisting of related information exchanged between developers. Sifting through the large amount of information available within a project repository can be time-consuming. In this dissertation, we proposed a method for multi-document summarisation from heterogeneous software development artefacts that helps software developers by automatically generating summaries targeted at their information needs. To achieve this aim, we first had our gold-standard summaries created; we then characterised them and used them to identify the main types of software artefacts that describe developers’ activities in GitHub project repositories. This initial step was important for the present study, as we had no prior knowledge about the types of artefacts linked to developers’ activities that could be used as sources of input for our proposed multi-document summarisation techniques. In addition, we used the gold-standard summaries later to evaluate the quality of our summarisation techniques. We then developed extractive multi-document summarisation approaches to automatically summarise software development artefacts within a given time frame by integrating techniques from natural language processing, software repository mining, and data-driven search-based software engineering. The generated summaries were then evaluated in a user study to investigate whether experts considered that the generated summaries mentioned every important project activity that appeared in the gold-standard summaries. The results of the user study showed that generating summaries from different kinds of software artefacts is possible, and that the generated summaries are useful in describing a project’s development activities over a given time frame. Finally, we investigated the potential of using source code comments for summarisation by assessing the documented information about Java primitive variables in comments against three types of knowledge. Results showed that the source code comments did contain additional information and could be useful for summarising developers’ development activities.
    Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
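
    The extractive approach outlined above can be illustrated with a minimal, generic sketch (not the pipeline developed in the dissertation): sentences drawn from heterogeneous artefacts such as commits, issues and pull request comments are scored by TF-IDF centrality, and the top-ranked sentences form the summary for a time frame. The artefact texts and the summarise helper below are hypothetical.

```python
# Minimal extractive multi-document summarisation sketch (illustrative only).
# Assumes artefacts for the chosen time frame were already collected;
# the example texts and helper name are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarise(artefact_texts, top_k=3):
    """Rank sentences from mixed artefacts by TF-IDF centrality and
    return the top_k as an extractive summary."""
    # Split every artefact into rough sentences.
    sentences = [s.strip() for text in artefact_texts
                 for s in text.split(".") if s.strip()]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    # A sentence is "central" if it is similar to many other sentences.
    centrality = cosine_similarity(tfidf).mean(axis=1)
    ranked = sorted(zip(centrality, sentences), reverse=True)
    return [sentence for _, sentence in ranked[:top_k]]

artefacts = [
    "Fixed the crash in the login handler. Added a regression test.",
    "Issue #42: the login handler crashes when the token is empty.",
    "Released version 1.2 with the login fix and updated docs.",
]
print(summarise(artefacts))
```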

    Semantic Search and Visual Exploration of Computational Notebooks

    Get PDF
    Code search is an important and frequent activity for developers using computational notebooks (e.g., Jupyter). The flexibility of notebooks brings challenges for effective code search, where classic search interfaces for traditional software code may be limited. In this thesis, we propose NBSearch, a novel system that supports semantic code search in notebook collections and interactive visual exploration of search results. NBSearch leverages advanced machine learning models to enable natural language search queries and intuitive visualizations to present complicated intra- and inter-notebook relationships in the returned results. We developed NBSearch through an iterative participatory design process with two experts from a large software company. We evaluated the models with a series of experiments and the whole system with a controlled user study. The results indicate the feasibility of our analytical pipeline and the effectiveness of NBSearch in supporting code search in large notebook collections. As one important future direction, the search quality of NBSearch was further improved by incorporating markdown cells in notebooks, and its performance was evaluated by comparison with the original implementation.
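
    As a rough illustration of semantic code search in general (not NBSearch's actual pipeline), one can embed both the query and the notebook code cells with an off-the-shelf sentence-embedding model and rank cells by cosine similarity. The cells and the search helper below are hypothetical, and the sketch assumes the sentence-transformers package.

```python
# Illustrative semantic search over notebook code cells (not NBSearch itself).
# Assumes the sentence-transformers package; cell contents are hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

cells = [
    "df = pd.read_csv('sales.csv'); df.head()",
    "plt.plot(df['date'], df['revenue'])",
    "model = LinearRegression().fit(X_train, y_train)",
]

def search(query, top_k=2):
    """Embed the query and all cells, then rank cells by cosine similarity."""
    vectors = model.encode(cells + [query])
    cell_vecs, query_vec = vectors[:-1], vectors[-1]
    scores = cell_vecs @ query_vec / (
        np.linalg.norm(cell_vecs, axis=1) * np.linalg.norm(query_vec))
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), cells[i]) for i in best]

print(search("plot revenue over time"))
```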

    Flavor text generation for role-playing video games

    Get PDF

    MediaSync: Handbook on Multimedia Synchronization

    Get PDF
    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.

    Data-Driven Methods for Data Center Operations Support

    Get PDF
    During the last decade, cloud technologies have been evolving at an impressive pace, such that we are now living in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load-balancing, monitoring, etc. The possibility of having on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems, partitioned and replicated among many geographically dislocated data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become incredibly daunting tasks. To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of improving the automations that make releasing new software, and responding to unforeseen issues, faster and sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automations can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that can enable a proactive approach to data center operations. This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are indeed characterized by a very large availability of diverse data, collected at each level of the stack, such as: time series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control system history, code review feedback). Such data are also typically updated with relatively high frequency, and subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate at capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from having robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may also help in conducting more precise and efficient root-cause analysis. Also, leveraging accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for the realization of the aforementioned support tools. On the other hand, because of their complexity, these kinds of models often require substantial processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations. Automated operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
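
    As context for the kind of support tooling discussed above, the sketch below shows a deliberately simple, purely illustrative anomaly-detection baseline on an operational metric: flag points that deviate from a rolling mean by several rolling standard deviations. It is not taken from the thesis, which targets deep learning methods; the signal and thresholds are made up.

```python
# Toy residual-based anomaly detector for an operational metric.
# Illustrative baseline only; the metric values are made up.
import numpy as np

def detect_anomalies(series, window=12, threshold=3.0):
    """Flag points that deviate from the rolling mean by more than
    `threshold` rolling standard deviations."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

# A CPU-utilisation-like signal with an injected spike at index 30.
rng = np.random.default_rng(0)
cpu = 40 + 5 * rng.standard_normal(60)
cpu[30] = 95
print(np.flatnonzero(detect_anomalies(cpu)))
```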

    A Systematic Review of Automated Query Reformulations in Source Code Search

    Full text link
    Fixing software bugs and adding new features are two of the major maintenance tasks. Software bugs and features are reported as change requests. Developers consult these requests and often choose a few keywords from them as an ad hoc query. They then execute the query with a search engine to find the exact locations within the software code that need to be changed. Unfortunately, even experienced developers often fail to choose appropriate queries, which leads to costly trial and error during code search. Over the years, many studies have attempted to reformulate developers' ad hoc queries to support them. In this systematic literature review, we carefully select 70 primary studies on query reformulations from 2,970 candidate studies, perform an in-depth qualitative analysis (e.g., Grounded Theory), and then answer seven research questions with major findings. First, to date, eight major methodologies (e.g., term weighting, term co-occurrence analysis, thesaurus lookup) have been adopted to reformulate queries. Second, the existing studies suffer from several major limitations (e.g., lack of generalizability, the vocabulary mismatch problem, subjective bias) that might prevent their wide adoption. Finally, we discuss best practices and future opportunities to advance the state of research in search query reformulations.
    Comment: 81 pages, accepted at TOSE
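
    Term weighting, one of the eight reformulation methodologies identified, can be sketched generically as follows: rank the terms of a change request by TF-IDF weight against a small corpus of requests and keep the highest-weighted terms as the reformulated query. This is an illustrative sketch, not a technique from any specific primary study; the change-request texts and the reformulate helper are made up.

```python
# Sketch of query reformulation via term weighting (TF-IDF).
# Generic illustration; the change-request texts are made up.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "App crashes with null pointer exception when saving user profile",
    "Add dark mode toggle to the settings screen",
    "Login button unresponsive after session timeout on mobile",
]

def reformulate(change_request_index, top_k=4):
    """Return the top_k highest-weighted terms of one change request."""
    vectorizer = TfidfVectorizer(stop_words="english")
    weights = vectorizer.fit_transform(corpus)
    terms = vectorizer.get_feature_names_out()
    row = weights[change_request_index].toarray().ravel()
    ranked = row.argsort()[::-1][:top_k]
    return [terms[i] for i in ranked]

# Suggested query for the first change request.
print(reformulate(0))
```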

    This is not a real image: Generative artificial intelligence to enhance radiology education

    Get PDF
    Radiologists fulfill a critical role in our healthcare system, but their workload has increased substantially over time. Although algorithmic tools have been proposed to support the diagnostic process, the workload is not efficiently decreased in this manner. However, another possibility is to decrease workload in a different area. The main topic of this thesis is how simulation training can be realized to aid the image interpretation skills training of the radiology resident. To realize simulated training it is necessary to know (1) how we can create realistic artificial medical images, (2) how we can control their variety, and (3) how we can adjust their difficulty. Firstly, it is shown that artificial medical images can blend in with original ones. For this purpose a GAN model is used to create 2-dimensional artificial medical images. The created artificial images are assessed both quantitatively and qualitatively in terms of their realism. Secondly, to better control the variety of the artificial medical images, a diffusion model is used to guide both coarse and fine features. The results show that the model was able to adjust fine-feature characteristics of the pathology type according to the feedback of an independent classifier. Thirdly, a method is presented to describe the detection difficulty of an (artificial) medical image using quantitative pathology and image characteristics. Results show that it is possible to describe almost two thirds of the variation in difficulty using these quantitative characteristics and, as such, to describe images as having lower or higher detection difficulty. Finally, the responsible implementation of the medical image simulator to assist in image interpretation skills training is investigated. Combining the results of this thesis resulted in a prototype 'medical image simulator'. This simulator can take over part of the workload of the supervising radiologists by providing a means for independent repetitive practice for the resident. The realistic artificial medical images can be varied in terms of their content and their difficulty. This can enable a personalized experience that can enhance training of image interpretation skills and make it more efficient.
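
    The third contribution, relating quantitative pathology and image characteristics to detection difficulty, is at heart a regression problem. The sketch below illustrates only that general idea on entirely synthetic features and difficulty scores; it is not the thesis's data or model.

```python
# Sketch: explain detection difficulty from quantitative image features.
# Entirely synthetic data; illustrative of the regression idea only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical per-image characteristics: lesion size, contrast, background noise.
features = rng.uniform(size=(n, 3))
# Synthetic "difficulty": small, low-contrast lesions in noisy images are harder.
difficulty = (1.0 - 0.5 * features[:, 0] - 0.3 * features[:, 1]
              + 0.2 * features[:, 2] + 0.1 * rng.standard_normal(n))

model = LinearRegression().fit(features, difficulty)
print("explained variance (R^2):", round(model.score(features, difficulty), 2))
```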