1,260 research outputs found
Architectural Design Decision Documentation through Reuse of Design Patterns
While design decisions on the application of architectural design patterns involve complex trade-offs between functionality and quality properties, such decisions are often made spontaneously, and the documentation of decisions and of trace links to related artefacts is usually insufficient. The approach proposed in this thesis provides support for overcoming these problems: it combines support for evaluating design pattern application with semi-automated documentation of decisions and trace links.
The Grind for Good Data: Understanding ML Practitioners' Struggles and Aspirations in Making Good Data
We thought data to be simply given, but reality tells otherwise: it is
costly, situation-dependent, and muddled with dilemmas, constantly requiring
human intervention. The ML community's focus on data quality is increasing in
the same vein, as good data is vital for successful ML systems. Nonetheless,
few works have investigated dataset builders and the specifics of what they
do, and struggle with, to make good data. In this study, through
semi-structured interviews with 19 ML experts, we present what humans
actually do and consider in each step of the data construction pipeline. We
further organize their struggles under three themes: 1) trade-offs arising
from real-world constraints; 2) harmonizing assorted data workers for
consistency; and 3) the necessity of human intuition and tacit knowledge for
processing data. Finally, we discuss why such struggles are inevitable for
good data and what practitioners aspire to, toward providing systematic
support for data work.
Architectural Design Decision Documentation through Reuse of Design Patterns
The ADMD3 approach presented in this book enhances architectural design decision documentation via reuse of design patterns. It combines support for evaluating pattern application with semi-automated documentation of decision rationale and trace links. The approach is based on a new kind of design pattern catalogue, in which usual pattern descriptions are captured together with question annotations to the patterns and information on the architectural structure of the patterns.
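The catalogue structure described above, pattern descriptions captured together with question annotations, could be sketched roughly as follows; all class, field, and pattern names here are invented for illustration and are not taken from the ADMD3 implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PatternEntry:
    """One catalogue entry: a pattern description plus question
    annotations that prompt the architect to record decision rationale
    and candidate trace links. (Hypothetical structure.)"""
    name: str
    description: str
    questions: list = field(default_factory=list)

# A minimal catalogue with a single, well-known pattern as an example.
catalogue = [
    PatternEntry(
        name="Layers",
        description="Structure an application into groups of subtasks "
                    "at different levels of abstraction.",
        questions=[
            "Which quality attributes motivate strict layering?",
            "Which components realize each layer?",  # trace-link candidates
        ],
    )
]
```

Iterating over an entry's `questions` while applying the pattern is one way such annotations could drive semi-automated capture of decision rationale.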
Conversational Challenges in AI-Powered Data Science: Obstacles, Needs, and Design Opportunities
Large Language Models (LLMs) are being increasingly employed in data science
for tasks like data preprocessing and analytics. However, data scientists
encounter substantial obstacles when conversing with LLM-powered chatbots and
acting on their suggestions and answers. We conducted a mixed-methods study,
including contextual observations, semi-structured interviews (n=14), and a
survey (n=114), to identify these challenges. Our findings highlight key issues
faced by data scientists, including contextual data retrieval, formulating
prompts for complex tasks, adapting generated code to local environments, and
refining prompts iteratively. Based on these insights, we propose actionable
design recommendations, such as data brushing to support context selection, and
inquisitive feedback loops to improve communications with AI-based assistants
in data-science tools.
Comment: 24 pages, 8 figures
A systematic mapping study of API usability evaluation methods
An Application Programming Interface (API) provides a programmatic interface to a software component that is often offered publicly and may be used by programmers who are not the API’s original designers. APIs play a key role in software reuse: by reusing high-quality components and services, developers can increase their productivity and avoid costly defects. The usability of an API is a qualitative characteristic that evaluates how easy the API is to use. Recent years have seen a considerable increase in research efforts aimed at evaluating the usability of APIs. An API usability evaluation can identify problem areas and provide recommendations for improving the API. In this systematic mapping study, we focus on 47 primary studies to identify the aims and methods of API usability studies. We investigate which API usability factors are evaluated, at which phases of API development the usability of an API is evaluated, and what the current limitations and open issues in API usability evaluation are. We believe that the results of this literature review will be useful for both researchers and industry practitioners interested in investigating the usability of APIs and in new API usability evaluation methods.
SAGA: Summarization-Guided Assert Statement Generation
Generating meaningful assert statements is one of the key challenges in
automated test case generation, which requires understanding the intended
functionality of the tested code. Recently, deep learning-based models have
shown promise in improving the performance of assert statement generation.
However, existing models rely only on the test prefixes and their
corresponding focal methods, ignoring the developer-written summarization.
Based on our observations, the summarization contents usually express the
intended program behavior or contain parameters that will appear directly in
the assert statement. Such information will help existing models address their
current inability to accurately predict assert statements. This paper presents
a novel summarization-guided approach for automatically generating assert
statements. To derive generic representations for natural language (i.e.,
summarization) and programming language (i.e., test prefixes and focal
methods), we leverage a pre-trained language model as the reference
architecture and fine-tune it on the task of assert statement generation. To
the best of our knowledge, the proposed approach makes the first attempt to
leverage the summarization of focal methods as the guidance for making the
generated assert statements more accurate. We demonstrate the effectiveness of
our approach on two real-world datasets when compared with state-of-the-art
models.
Comment: Preprint, to appear in the Journal of Computer Science and Technology (JCST).
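The key observation in the abstract, that a summary often names the behavior or parameter values that surface directly in the assert statement, can be illustrated with a minimal hypothetical example. The focal method, docstring, test prefix, and assert below are all invented for illustration; SAGA itself generates asserts with a fine-tuned pre-trained language model rather than by hand:

```python
# Focal method with a developer-written summary (docstring). The summary
# states the intended behavior: the result stays within [low, high].
def clamp(value, low, high):
    """Return value limited to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Test prefix: the setup and invocation a model would see as input.
result = clamp(15, 0, 10)

# The summary mentions the range bound, which appears directly in the
# assert a summarization-guided model could plausibly generate.
assert result == 10
```

Without the docstring, a model conditioned only on the prefix must infer the clamping behavior from the method body; the summary makes the expected value explicit.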
On systematic approaches for interpreted information transfer of inspection data from bridge models to structural analysis
In conjunction with improved methods of monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data for extracting quantitative information about deteriorations, such as crack patterns. However, the knowledge gain results from processing this information in a structural context, i.e., relating the damage artifacts to building components; this way, transformation to structural analysis is enabled. This approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are further assessed. The need for user-controlled interpretation steps is identified, and the developed prototype thus allows interaction at subsequent model stages. The latter has the advantage that interpretation steps can be separated individually into either a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
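The core idea of relating damage artifacts to building components can be sketched as a small grouping step; the class, field, and component names below are invented for illustration and stand in for the IFC-based schema the prototype actually uses:

```python
from dataclasses import dataclass

@dataclass
class CrackObservation:
    """One damage artifact extracted from an automated image-based
    inspection. (Hypothetical fields; a real model would carry more.)"""
    component_id: str   # link to a building component, e.g. an IFC GUID
    width_mm: float
    length_m: float

def by_component(observations):
    """Group damage artifacts by structural component so they can be
    handed to a structural-analysis or damage information model."""
    grouped = {}
    for obs in observations:
        grouped.setdefault(obs.component_id, []).append(obs)
    return grouped

# Invented inspection data for two components of a bridge.
cracks = [
    CrackObservation("girder-01", 0.3, 1.2),
    CrackObservation("girder-01", 0.1, 0.4),
    CrackObservation("deck-02", 0.2, 2.0),
]
damage_model = by_component(cracks)
```

Grouping by component is the step that puts pixel-level findings into a structural context, which is what makes a subsequent transfer to structural analysis possible.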