8 research outputs found

    Effective User Experience in Online Technical Communication Courses: Employing Multiple Methods within Organizational Contexts to Assess Usability

    In teaching online technical communication courses, shaping an electronic interface requires extensive consideration of the user experience, both for students and for the faculty members who design and teach the courses. Technical communication faculty members should provide strong examples of effective user experiences and should be leaders in making the interfaces of online learning management systems as usable as possible. Principles of usability designed for general websites may or may not apply to learning management systems designed for educational purposes, so creating effective online technical communication courses requires attention to usability concerns and pedagogical concerns alike. To assess the usability and pedagogical effectiveness of online courses, faculty members may use indirect means such as heuristic analyses, as well as direct means such as usability testing, student feedback, and analytic tools. Each approach has advantages as well as limitations, and faculty members will gain the richest information by combining multiple approaches. In assessing usability and pedagogical effectiveness, faculty members also need to consider the situational constraints and resources of their unique contexts. Understanding those constraints, working within them, and using available resources well will strengthen their ability to enhance their students' experiences with online courses.

    A Decade of Code Comment Quality Assessment: A Systematic Literature Review

    Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no single definition of quality when it comes to evaluating code comments. The few existing studies on this topic focus instead on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments that have specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality? (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the full review of the 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, and thus may not generalize to other languages, and (ii) the analyzed studies focus on four main QAs out of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than automated assessment of comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers.
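
    The review's dominant quality attribute, consistency between a comment and its code, can be made concrete with a toy heuristic. The sketch below is not a technique from the reviewed studies; it is a minimal, assumed illustration that scores how many words of a method's name reappear (by prefix) in its comment, so a stale Javadoc summary scores lower than an up-to-date one.

        import re

        def name_words(identifier: str) -> set[str]:
            """Split a camelCase or snake_case identifier into lowercase tokens."""
            spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier).replace("_", " ")
            return {w.lower() for w in spaced.split() if w}

        def consistency(comment: str, method_name: str) -> float:
            """Fraction of method-name words that appear in the comment.
            Prefix matching ('delete' ~ 'deletes') crudely stands in for stemming."""
            comment_words = {w.lower() for w in re.findall(r"[A-Za-z]+", comment)}
            covered = [
                any(c.startswith(n) or n.startswith(c) for c in comment_words)
                for n in name_words(method_name)
            ]
            return sum(covered) / len(covered) if covered else 0.0

        # A comment left behind by a rename scores lower than a matching one.
        print(consistency("Returns the user's last login time.", "deleteUser"))    # 0.5
        print(consistency("Deletes the given user from the store.", "deleteUser")) # 1.0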

    A Digital Game Maturity Model

    Game development is an interdisciplinary endeavor that embraces artistic, software engineering, management, and business disciplines, and it is considered one of the most complex tasks in software engineering. Hence, to successfully develop good-quality games, game developers must consider and explore all related dimensions as well as discuss them with the stakeholders involved. This research facilitates a better understanding of the important dimensions of digital game development methodology. The increased popularity of digital games, the challenges game development organizations face in developing quality games, and severe competition in the digital game industry all demand an assessment of game development process maturity. Consequently, this study presents a Digital Game Maturity Model to evaluate the current development methodology in an organization. The objective is first to identify key factors in the game development process, then to classify these factors into target groups, and eventually to use this grouping as a theoretical basis for proposing a maturity model for digital game development. In doing so, the research focuses on three major stakeholders in game development: developers, consumers, and business management. The framework of the proposed model consists of assessment questionnaires made up of key factors identified in three empirical studies, a performance scale, and a rating method. The main goal of the questionnaires is to collect information about current processes and practices. This research contributes towards formulating a comprehensive and unified strategy for assessing game development process maturity. The proposed model was evaluated with two case studies from the digital game industry.
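
    The abstract does not spell out the model's performance scale or rating method, but the general shape of such an assessment can be sketched. The following is a purely illustrative Python sketch, assuming a 1-5 performance scale, invented example factors, and an unweighted mean as the rating method; the model's actual factors and scoring may differ.

        from statistics import mean

        # Hypothetical questionnaire answers (1-5 scale), grouped by the three
        # stakeholder groups the study names; the factors themselves are invented.
        responses = {
            "developers": {"process_docs": 3, "tool_support": 4, "testing": 2},
            "consumers": {"playability": 4, "stability": 3},
            "management": {"budget_tracking": 5, "milestone_review": 3},
        }

        def group_scores(answers: dict) -> dict:
            """Average the questionnaire answers within each stakeholder group."""
            return {group: mean(items.values()) for group, items in answers.items()}

        def overall_maturity(answers: dict) -> float:
            """Overall rating as the unweighted mean of group scores (an assumption)."""
            return mean(group_scores(answers).values())

        print(group_scores(responses))                # per-group averages
        print(round(overall_maturity(responses), 2))  # 3.5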

    “[Taking] Responsibility for the Community”: Women Claiming Power and Legitimacy in Technical and Professional Communication in India, 1999-2016

    Though the field of technical and professional communication has long been saturated with the narratives of Euro-Western males, the field has a responsibility to expand its lens of study to include the experiences of global and nontraditional practitioners. This study examines the experiences of Indian women working as practitioners and building power and legitimacy in a globalized economy. Drawing from interviews with 49 practitioners as well as an analysis of historical documents, the study examines the methods Indian practitioners have used to build power and legitimacy by founding professional organizations, leveraging their educational opportunities, and using tactical strategies in their workplaces. The data suggest that Indian women have done strong, innovative work in building their own legitimacy in the field. However, work remains to remove the barriers that disproportionately exclude women from professionalizing structures.

    Assessing Comment Quality in Object-Oriented Languages

    Previous studies have shown that high-quality code comments support developers in software maintenance and program comprehension tasks. However, the semi-structured nature of comments, the several conventions for writing comments, and the lack of quality assessment tools for all aspects of comments make comment evaluation and maintenance a non-trivial problem. To understand what specifies a high-quality comment and to build effective assessment tools, this thesis emphasizes acquiring a multi-perspective view of comments, approached by analyzing (1) the academic support for comment quality assessment, (2) developer commenting practices across languages, and (3) developer concerns about comments. Our findings regarding academic support for assessing comment quality show that researchers have primarily focused on Java over the last decade, even though the trend of using polyglot environments in software projects is increasing. Similarly, the trend of analyzing specific types of code comments (method comments, or inline comments) is increasing, but studies rarely analyze class comments. We found 21 quality attributes that researchers consider when assessing comment quality, and manual assessment is still the most commonly used technique to assess various quality attributes. Our analysis of developer commenting practices showed that developers embed a mixed level of detail in class comments, ranging from high-level class overviews to low-level implementation details, across programming languages. They follow style guidelines regarding what information to write in class comments but violate structure and syntax guidelines. They primarily face problems locating relevant guidelines for writing consistent and informative comments, verifying the adherence of their comments to the guidelines, and evaluating the overall state of comment quality. To help researchers and developers build comment quality assessment tools, we contribute: (i) a systematic literature review (SLR) of ten years (2010–2020) of research on assessing comment quality, (ii) a taxonomy of quality attributes used to assess comment quality, (iii) an empirically validated taxonomy of class comment information types from three programming languages, (iv) a multi-programming-language approach to automatically identify these comment information types, (v) an empirically validated taxonomy of comment convention-related questions and recommendations from various Q&A forums, and (vi) a tool to gather discussions from multiple developer sources, such as Stack Overflow and mailing lists. Our contributions provide empirical evidence of developers' interest in reducing effort in the software documentation process, of the limited support developers get in automatically assessing comment quality, and of the challenges they face in writing high-quality comments. This work lays the foundation for future effective comment quality assessment tools and techniques.
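
    As a toy illustration of what identifying comment information types can look like, here is a keyword-based labeler; the categories and rules are assumptions for illustration, not the empirically validated taxonomy or the multi-language approach contributed by the thesis.

        import re

        # Hypothetical information types; the thesis's taxonomy is richer and
        # its identification approach is not a simple keyword match.
        RULES = [
            ("usage", re.compile(r"\b(usage|example|how to|call)\b", re.I)),
            ("warning", re.compile(r"\b(warning|caution|not thread-safe|deprecated)\b", re.I)),
            ("todo", re.compile(r"\b(todo|fixme)\b", re.I)),
        ]

        def label(line: str) -> str:
            """Return the first matching information type, defaulting to 'summary'."""
            for info_type, pattern in RULES:
                if pattern.search(line):
                    return info_type
            return "summary"

        class_comment = [
            "Represents a pooled database connection.",
            "Usage: call acquire() before running any query.",
            "Warning: not thread-safe; guard access with a lock.",
            "TODO: support connection timeouts.",
        ]
        for line in class_comment:
            print(f"{label(line):8}<- {line}")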

    XXIII Edition of the Workshop de Investigadores en Ciencias de la Computación: Proceedings

    Compilation of the papers presented at the XXIII Workshop de Investigadores en Ciencias de la Computación (WICC), held in Chilecito (La Rioja) in April 2021. Red de Universidades con Carreras en Informática.