    A Decade of Code Comment Quality Assessment: A Systematic Literature Review

    Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic focus instead on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality? and (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, and thus may not generalize to other languages, and (ii) the analyzed studies focus on four main QAs out of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than automated assessment of the comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers.

    Model-based Specification and Analysis of Natural Language Requirements in the Financial Domain

    Software requirements form an important part of the software development process. In many software projects conducted by companies in the financial sector, analysts specify software requirements using a combination of models and natural language (NL). Neither models nor NL requirements provide a complete picture of the information in the software system, and NL is highly prone to quality issues, such as vagueness, ambiguity, and incompleteness. Poorly written requirements are difficult to communicate and reduce the opportunity to process requirements automatically, particularly the automation of tedious and error-prone tasks, such as deriving acceptance criteria (AC). AC are conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders; they are derived by developers and testers from requirement models. To obtain precise AC, it is necessary to reconcile the information content in the NL requirements and the requirement models. In collaboration with an industrial partner from the financial domain, we first systematically developed and evaluated a controlled natural language (CNL) named Rimay to help analysts write functional requirements. We then proposed an approach that detects common syntactic and semantic errors in NL requirements; it suggests Rimay patterns to fix the errors and convert NL requirements into Rimay requirements. Based on these results, we propose a semi-automated approach that reconciles the content in the NL requirements with that in the requirement models, helping modelers enrich their models with information extracted from NL requirements. Finally, an existing test-specification derivation technique is applied to the enriched model to generate AC. The first contribution of this dissertation is a qualitative methodology that can be used to systematically define a CNL for specifying functional requirements. This methodology was used to create Rimay, a CNL grammar for specifying functional requirements, derived from an extensive qualitative analysis of a large number of industrial requirements and a systematic process using lexical resources. An empirical evaluation of Rimay in a realistic setting, through an industrial case study, demonstrated that 88% of the requirements used in the evaluation were successfully rephrased using Rimay. The second contribution is an automated approach that detects syntactic and semantic errors, which we refer to as smells, in unstructured NL requirements. To this end, we first proposed a set of 10 common smells found in the NL requirements of financial applications. We then derived a set of 10 Rimay patterns suggested as fixes for these smells. Finally, we developed an automated approach that analyzes the syntax and semantics of NL requirements to detect any smells present and then suggests a Rimay pattern to fix each smell. We evaluated this approach in an industrial case study and obtained promising results for detecting smells in NL requirements (precision 88%) and for suggesting Rimay patterns (precision 89%). The last contribution was prompted by the observation that reconciling the information content in the NL requirements and the associated models is necessary to obtain precise AC. To achieve this, we defined a set of 13 information extraction rules that automatically extract AC-related information from NL requirements written in Rimay. Next, we proposed a systematic method that generates recommendations for model enrichment based on the information extracted by these rules. Using a real case study from the financial domain, we evaluated the usefulness of the AC-related model enrichments recommended by our approach: the domain experts found that 89% of the recommended enrichments were relevant to AC but absent from the original model (precision of 89%).
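
    As an illustration of the kind of rule-based smell detection and pattern suggestion the dissertation describes, the following minimal Python sketch flags two requirement smells and suggests a fix for each; the smell names, regular expressions, and suggestions are illustrative assumptions, not the actual 10 smells or Rimay patterns from the thesis.

        import re

        # Hypothetical stand-ins for two requirement smells (vague terms and
        # passive voice); NOT the dissertation's actual smell catalog.
        SMELL_PATTERNS = {
            "vague_term": re.compile(
                r"\b(appropriate|adequate|as soon as possible|user-friendly)\b", re.I),
            "passive_voice": re.compile(r"\b(shall|must|will)\s+be\s+\w+ed\b", re.I),
        }

        # Hypothetical Rimay-style rewrite suggestions keyed by smell name.
        SUGGESTIONS = {
            "vague_term": "Replace the vague term with a measurable condition.",
            "passive_voice": "Name the actor: '<Actor> shall <verb> <object>'.",
        }

        def detect_smells(requirement):
            """Return (smell, suggestion) pairs found in one NL requirement."""
            return [(name, SUGGESTIONS[name])
                    for name, pattern in SMELL_PATTERNS.items()
                    if pattern.search(requirement)]

        # Example: this requirement triggers both illustrative smells.
        for smell, hint in detect_smells(
                "The report shall be generated as soon as possible."):
            print(f"{smell}: {hint}")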

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or into a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization approaches, namely visual localization, inertial navigation systems, and radio localization. For mapping indoor environments and localizing autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
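
    To make the sensor-fusion idea concrete, here is a minimal one-dimensional Python sketch of a complementary filter that blends a smooth but drifting inertial track with noisy but drift-free radio (e.g., WiFi or UWB) position fixes; the gain and the example numbers are illustrative assumptions rather than a specific algorithm from the book.

        def complementary_filter(inertial_track, radio_fixes, alpha=0.95):
            """Fuse dead-reckoned positions (smooth, drifting) with radio
            fixes (noisy, drift-free); alpha weights the inertial estimate."""
            estimate = radio_fixes[0]           # start from an absolute fix
            prev_inertial = inertial_track[0]
            fused = []
            for inertial, radio in zip(inertial_track, radio_fixes):
                estimate += inertial - prev_inertial   # apply inertial displacement
                prev_inertial = inertial
                estimate = alpha * estimate + (1 - alpha) * radio  # correct drift
                fused.append(estimate)
            return fused

        # Inertial track drifts upward; radio fixes are noisy but unbiased.
        print(complementary_filter([0.0, 1.1, 2.2, 3.3, 4.4],
                                   [0.0, 0.9, 2.1, 3.0, 4.2]))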

    Software Engineering 2021: Conference held 22-26 February 2021, Braunschweig/virtual


    Examining Introductory Computer Science Student Cognition When Testing Software Under Different Test Adequacy Criteria

    The ability to test software is invaluable in all areas of computer science, but it is often neglected in computer science curricula. Test adequacy criteria (TAC), which measure the effectiveness of a test suite, have been used as aids to improve software testing teaching practices, but little is known about how students respond to them. Studies have examined the cognitive processes of students as they program and of professional developers as they write tests, but none have investigated how student testers test with TAC. If we are to improve how TAC are used in the classroom, we must start by understanding the different ways that they affect students’ thought processes as they write tests. In this thesis, we take a grounded theory approach to reveal the underlying cognitive processes that students use as they test under no feedback, condition coverage, and mutation analysis. We recorded 12 students as they thought aloud while creating test suites under these feedback mechanisms, and then analyzed the recordings to identify the thought processes they used. We present our findings in the form of the phenomena we identified, which can be investigated further to shed more light on how different TAC affect students as they write tests.
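
    For readers unfamiliar with the two TAC used in the study, the following small Python example (hypothetical, not taken from the thesis) shows what condition coverage and mutation analysis each demand from a test suite.

        def discount(age, member):
            """Grant a discount to seniors or members."""
            return age >= 65 or member

        def test_senior():
            assert discount(70, False) is True

        # Condition coverage: test_senior() evaluates (age >= 65) only as True
        # and (member) only as False, so the criterion reports the missing
        # outcomes; adding calls such as discount(30, True) and
        # discount(30, False) would cover them.
        #
        # Mutation analysis: a mutant replacing `or` with `and` makes
        # discount(70, False) return False, so test_senior() kills it; a mutant
        # replacing `>=` with `>` survives until a test exercises age == 65.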

    Security and Privacy of IP-ICN Coexistence: A Comprehensive Survey

    Internet usage has changed since its original design. As a result, the current Internet must cope with several limitations, including performance degradation, exhaustion of IP addresses, and multiple security and privacy issues. Nevertheless, replacing the current Internet's network layer, i.e., the Internet Protocol (IP), with Information-Centric Networking (ICN) is a challenging and expensive task. It also requires worldwide coordination among Internet Service Providers, backbone operators, and Autonomous Systems. Additionally, history has shown that technology transitions (e.g., from 3G to 4G, or from IPv4 to IPv6) are not immediate; the replacement usually involves a long coexistence period between the old and new technologies. Similarly, we believe that the replacement of the current Internet will transition through a coexistence of IP and ICN. Although the tremendous number of security and privacy issues of the current Internet has taught us the importance of designing architectures securely, only a few of the proposed architectures adopt security by design. Therefore, this article aims to provide the first comprehensive security and privacy (SP) analysis of the state-of-the-art coexistence architectures. Additionally, it offers a horizontal comparison of security and privacy among three deployment approaches of the IP and ICN protocols (overlay, underlay, and hybrid) and a vertical comparison among ten considered security and privacy features. Our analysis shows that most of the architectures fail to provide several SP features, including data and traffic flow confidentiality, availability, and communication anonymity. We believe this article draws a picture of the secure combination of current and future protocol stacks during the coexistence phase that the Internet will inevitably traverse.

    Reviews and Perspectives on Smart and Sustainable Metropolitan and Regional Cities

    The notion of smart and sustainable cities offers an integrated and holistic approach to urbanism, aiming to achieve the long-term goals of urban sustainability and resilience. In essence, a smart and sustainable city is an urban locality that functions as a robust system of systems with sustainable practices to generate desired outcomes and futures for all humans and non-humans. This book contributes to improving research and practice in smart and sustainable metropolitan and regional cities and urbanism by bringing together literature reviews and scholarly perspective pieces, forming an open-access knowledge warehouse. It contains contributions that offer insights into research and practice in smart and sustainable metropolitan and regional cities through in-depth conceptual debates and perspectives, insights from the literature and best practice, and thoroughly identified research themes and development trends. This book serves as a repository of relevant information, material, and knowledge to support research, policymaking, practice, and the transferability of experiences to address challenges in establishing smart and sustainable metropolitan and regional cities and urbanism in the era of climate change, biodiversity collapse, natural disasters, pandemics, and socioeconomic inequalities.

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, which can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.
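
    As a toy illustration of the latency trade-off that motivates task offloading in fog/edge computing, the Python sketch below picks the execution site with the lower estimated completion time; the cost model and all numbers are illustrative assumptions, not taken from the collected articles.

        def choose_site(input_bytes, cycles,
                        device_hz=1e9, edge_hz=8e9,
                        uplink_bps=50e6, edge_rtt_s=0.005):
            """Return 'edge' or 'device', whichever finishes the task sooner."""
            local_latency = cycles / device_hz
            # Edge latency = upload time + network round trip + remote compute.
            edge_latency = input_bytes * 8 / uplink_bps + edge_rtt_s + cycles / edge_hz
            return "edge" if edge_latency < local_latency else "device"

        # Compute-heavy AR frame analysis: offloading wins despite the upload.
        print(choose_site(input_bytes=200_000, cycles=5e9))  # -> edge
        # Tiny sensor reading: local processing avoids the network overhead.
        print(choose_site(input_bytes=1_000, cycles=1e6))    # -> device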