9,190 research outputs found

    Choosing a Suitable Requirement Prioritization Method: A Survey

    Full text link
    Software requirements prioritization plays a crucial role in software development. It can be viewed as the process of ordering requirements by determining which must be implemented first and which can be deferred. Powerful requirements prioritization techniques are of paramount importance to finishing the implementation on time and within budget. Many factors affect requirement prioritization, such as stakeholder expectations, complexity, dependency, scalability, risk, and cost, so finding the proper order of requirements is a challenging process. Hence, different types of requirements prioritization techniques have been developed to support this task. In this survey, we propose a novel classification that groups prioritization techniques into two major classes, relative and exact, each divided into two subclasses. The classification depends on how the ranking value is assigned to a requirement: explicitly, as a specific value, in the exact prioritization class, or implicitly in the relative prioritization class. An overview of fifteen different requirements prioritization techniques is presented and organized according to the proposed classification criteria. Moreover, we compare methods within the same subclass to analyze their strengths and weaknesses. Based on the comparison results, the properties of each proposed subclass are identified. Drawing on these properties, we present recommendations to help project managers select the most suitable technique for prioritizing requirements based on their project characteristics (number of requirements, time, cost, and accuracy).
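    To make the two classes concrete, here is a minimal sketch, not taken from the survey: an exact technique assigns each requirement an explicit numeric value (cumulative voting, also known as the 100-dollar test), while a relative technique produces only an ordering from pairwise judgments. All requirement names and point allocations are invented for illustration.

```python
from itertools import combinations

requirements = ["login", "search", "export", "audit-log"]

# Exact: stakeholders distribute 100 points, so each requirement
# receives an explicit priority value.
votes = {"login": 45, "search": 30, "export": 15, "audit-log": 10}
exact_ranking = sorted(requirements, key=lambda r: votes[r], reverse=True)

# Relative: only "is A more important than B?" judgments are made;
# win counts induce an order without absolute priority values.
def prefers(a, b):
    return votes[a] > votes[b]  # stands in for a human judgment

wins = {r: 0 for r in requirements}
for a, b in combinations(requirements, 2):
    wins[a if prefers(a, b) else b] += 1
relative_ranking = sorted(requirements, key=lambda r: wins[r], reverse=True)

print(exact_ranking)    # ['login', 'search', 'export', 'audit-log']
print(relative_ranking) # same order, but the magnitudes are gone
```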

    Happy software developers solve problems better: psychological measurements in empirical software engineering

    Full text link
    For more than 30 years, it has been claimed that a way to improve software developers' productivity and software quality is to focus on people and to provide incentives that make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity dominated by often-neglected human aspects. Among the skills required for software development, analytical problem-solving skills and creativity are essential to the software construction process. According to psychology research, affects (emotions and moods) deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants that investigates the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results support the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. This study makes the following contributions: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical problem-solving skills in empirical software engineering, and (3) raising the need to study the human factors of software engineering from a multidisciplinary viewpoint. Comment: 33 pages, 11 figures, published at PeerJ

    Improvement of Work Process Performance with Task Assignments and Mental Workload Balancing

    Get PDF
    The outcome of a work process depends heavily on which tasks are assigned to which employees. However, assignments optimized solely on employees' qualifications may result in an uneven and ineffective workload distribution among them. Likewise, an even workload distribution that ignores employees' qualifications may cause unproductive employee-task matching and low employee performance. This trade-off is even more noticeable for work processes at critical time junctions, such as in military command centers and emergency rooms, which must be fast and effective without making errors. This study proposes that optimizing task-employee assignments according to employees' capabilities, while also keeping them under a workload threshold, results in better performance for work processes, especially during critical time junctions. The goal is to select the employee-task assignments that minimize the average duration of a work process while keeping the employees under a workload threshold to prevent errors caused by overload. Due to uncertainties inherent in the problem, related to the inter-arrival time of work orders, task durations, and employees' instantaneous workload, a simulation-optimization approach is used to solve the problem. More specifically, a discrete-event human performance simulation model evaluates the objective function, coupled with a genetic-algorithm-based meta-heuristic that searches the solution space. This approach proved useful in determining the right task-agent assignments by taking into account the employees' qualifications and mental workload in order to minimize the average duration of a work process. A sample work process demonstrates the effectiveness of the developed simulation-optimization approach. Numerical tests indicate that the proposed approach finds better solutions than common practices and other simulation-optimization methods. Accordingly, by using this method, organizations can increase performance, manage excess workloads, and create more satisfying environments for employees, without modifying the structure of the process itself.
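    A minimal sketch of such a simulation-optimization loop, assuming invented task durations, skill factors, and a hypothetical workload threshold, with a toy stochastic evaluation standing in for the paper's discrete-event human performance model:

```python
import random

N_TASKS, N_EMPLOYEES = 8, 3
SKILL = [[random.uniform(0.5, 1.5) for _ in range(N_TASKS)]
         for _ in range(N_EMPLOYEES)]          # speed factor per employee/task
BASE = [random.uniform(2, 6) for _ in range(N_TASKS)]  # nominal task durations
THRESHOLD = 12.0                               # workload cap per employee

def simulate(assign, runs=30):
    """Estimate mean process duration; penalize workload over threshold."""
    total = 0.0
    for _ in range(runs):
        load = [0.0] * N_EMPLOYEES
        for t, e in enumerate(assign):
            load[e] += BASE[t] * SKILL[e][t] * random.uniform(0.8, 1.2)
        duration = max(load)                   # employees work in parallel
        penalty = sum(max(0.0, l - THRESHOLD) for l in load) * 10
        total += duration + penalty
    return total / runs

def ga(pop_size=40, gens=60, mut=0.1):
    """Genetic algorithm over task->employee assignment vectors."""
    pop = [[random.randrange(N_EMPLOYEES) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=simulate)                 # fitness = simulated duration
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_TASKS) # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:          # random reassignment mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_EMPLOYEES)
            children.append(child)
        pop = elite + children
    return min(pop, key=simulate)

best = ga()
print("assignment:", best, "estimated duration:", round(simulate(best), 2))
```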

    Optimizing regression testing with AHP-TOPSIS metric system for effective technical debt evaluation

    Get PDF
    Regression testing is essential to ensure that the actual software product conforms to the expected requirements following modification. However, it can be costly and time-consuming. To address this issue, various approaches have been proposed for selecting test cases that provide adequate coverage of the modified software. Nonetheless, problems related to omitting and/or rerunning unnecessary test cases continue to pose challenges, particularly with regard to technical debt (TD) resulting from code coverage shortcomings and/or overtesting. In the case of testing-related shortcomings, incurring TD may yield cost and time savings in the short run, but it can lead to future maintenance and testing expenses. Most prior studies have treated test case selection as a single-objective or two-objective optimization problem. This study introduces a multi-objective decision-making approach to quantify and evaluate TD in regression testing. The proposed approach combines the analytic hierarchy process (AHP) with the technique for order preference by similarity to ideal solution (TOPSIS) to select the most suitable test cases in terms of objective values defined by test cost, code coverage, and test risk. This approach effectively manages software regression testing problems. The AHP method is used to eliminate subjective bias when optimizing objective weights, while the TOPSIS method evaluates and selects test-case alternatives based on TD. The effectiveness of this approach was compared with that of a specific multi-objective optimization method and a standard coverage methodology. Unlike other approaches, the proposed approach always accepts solutions based on balanced decisions, considering modifications and weighing risk analysis and testing costs against potential technical debt. The results demonstrate that the proposed approach reduces both TD and regression testing effort.
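    A minimal TOPSIS sketch for test-case selection under the three objectives named above (cost, coverage, risk). The decision matrix, the AHP-derived weights, and the benefit/cost direction of each criterion are illustrative assumptions, not values from the paper.

```python
import numpy as np

# rows: candidate test cases; columns: [test cost, code coverage, test risk]
X = np.array([[4.0, 0.80, 0.3],
              [2.5, 0.60, 0.2],
              [5.0, 0.95, 0.5],
              [3.0, 0.70, 0.1]])
weights = np.array([0.3, 0.5, 0.2])       # e.g., output of an AHP comparison
benefit = np.array([False, True, False])  # coverage up; cost and risk down

V = weights * X / np.linalg.norm(X, axis=0)   # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_minus = np.linalg.norm(V - anti,  axis=1)   # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)      # 1 = closest to ideal

for i in np.argsort(-closeness):
    print(f"test case {i}: closeness = {closeness[i]:.3f}")
```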

    UNDERSTANDING USER PERCEPTIONS AND PREFERENCES FOR MASS-MARKET INFORMATION SYSTEMS – LEVERAGING MARKET RESEARCH TECHNIQUES AND EXAMPLES IN PRIVACY-AWARE DESIGN

    Get PDF
    With cloud and mobile computing, a new category of software products emerges as mass-market information systems (IS) that address distributed and heterogeneous end-users. Understanding user requirements and the factors that drive user adoption is crucial for the successful design of such systems. IS research has suggested several theories and models to explain user adoption and intention to use, among them the IS Success Model and the Technology Acceptance Model (TAM). Although these approaches contribute to a theoretical understanding of the adoption and use of IS in mass markets, they are criticized for not being able to drive actionable insights on IS design, as they consider the IT artifact a black box (i.e., they do not sufficiently address the system's internal characteristics). We argue that IS needs to embrace market research techniques to understand and empirically assess user preferences and perceptions, in order to integrate the "voice of the customer" in a mass-market scenario. More specifically, conjoint analysis (CA), from market research, can add user preference measurements for designing high-utility IS. CA has gained popularity in IS research; however, little guidance is provided for its application in the domain. We aim to support the design of mass-market IS by establishing a reliable understanding of consumers' preferences for multiple factors combining functional, non-functional, and economic aspects. The results include a "Framework for Conjoint Analysis Studies in IS" and methodological guidance for applying CA. We apply our findings to the privacy-aware design of mass-market IS and evaluate their implications on user adoption. We contribute to both academia and practice. For academia, we contribute a more nuanced conceptualization of the IT artifact (i.e., the system) through a feature-oriented lens and a preference-based approach. We provide methodological guidelines that support researchers in studying user perceptions and preferences for design variations and in extending that to adoption. Moreover, the empirical studies on privacy-aware design contribute to a better understanding of domain-specific applications of CA for IS design and evaluation, with a nuanced assessment of user preferences for privacy-preserving features. For practice, we propose guidelines for integrating the voice of the customer for successful IS design.
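    A minimal sketch of conjoint analysis as a preference-measurement technique: respondents rate product profiles composed of feature levels, and ordinary least squares recovers part-worth utilities. The two features (a hypothetical privacy-preserving option and a price level) and the ratings are invented for illustration, not data from the thesis.

```python
import numpy as np

# profiles: [local_data_storage (0/1), price_high (0/1)]
profiles = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
ratings  = np.array([5.0, 3.0, 8.0, 6.5])   # one respondent's scores

# design matrix with intercept; OLS fit yields part-worth utilities
A = np.hstack([np.ones((4, 1)), profiles])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
base, u_privacy, u_price = coef
print(f"baseline utility: {base:.2f}")
print(f"part-worth of local data storage: {u_privacy:+.2f}")
print(f"part-worth of high price: {u_price:+.2f}")
```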

    ANALYSIS OF ALTERNATIVE MANUFACTURING PROCESSES FOR LIGHTWEIGHT BIW DESIGNS, USING ANALYTICAL HIERARCHY PROCESS

    Get PDF
    The main objective of the analysis was to investigate the forming of Body in White (BIW) panels using alternative processes most suitable for replacing the conventional press-working process, in order to reduce the total mass of the vehicle body structure. The selection of alternatives was guided by a multi-criteria decision-making tool, the Analytic Hierarchy Process (AHP), with alternatives selected based on their relative importance to the different manufacturing attributes considered. The selected processes were applied to the manufacturing of the different BIW parts indicated in the BOM, along with suggestions of the appropriate materials to be used.
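    A minimal AHP sketch: priority weights for candidate processes are derived from a pairwise comparison matrix via the principal eigenvector, followed by a consistency check. The 3x3 judgments and the alternative names are illustrative, not values from the study.

```python
import numpy as np

# pairwise comparisons of three hypothetical forming alternatives
# (e.g., hydroforming vs. roll forming vs. hot stamping)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # priority weights, sum to 1

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)   # consistency index
RI = 0.58                              # Saaty's random index for n = 3
print("weights:", np.round(w, 3), "CR:", round(CI / RI, 3))  # CR < 0.1 is acceptable
```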

    A Fuzzy AHP Model in Risk Ranking

    Get PDF
    The significant risks associated with construction projects require special attention from contractors, who must analyze and manage them. Risk management is the art and science of identifying, analyzing, and responding to risk factors throughout the life cycle of a project, in the best interest of its objectives. In the proposed model, we first identify the risks in construction projects and suitable criteria for evaluating them, and then structure the proposed AHP model. Finally, we measure the significant risks in construction projects (SRCP), based on the project's objectives, using the fuzzy analytic hierarchy process (FAHP) technique. Keywords: Construction projects, Project Risk Management, Fuzzy AHP
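    A minimal fuzzy AHP sketch, assuming triangular fuzzy numbers (l, m, u) for the pairwise judgments, row aggregation by the geometric mean, and centroid defuzzification to rank risks. The three risk factors and all judgments are invented for illustration; the paper's exact FAHP variant may differ.

```python
import numpy as np

# 3 risks compared pairwise with triangular fuzzy numbers (l, m, u)
one = (1.0, 1.0, 1.0)
M = [[one,             (2, 3, 4),       (4, 5, 6)],
     [(1/4, 1/3, 1/2), one,             (1, 2, 3)],
     [(1/6, 1/5, 1/4), (1/3, 1/2, 1.0), one]]

def geo_mean(row):
    """Component-wise geometric mean of a row of fuzzy numbers."""
    arr = np.array(row)                       # shape (n, 3)
    return arr.prod(axis=0) ** (1 / len(row))

fuzzy_w = np.array([geo_mean(r) for r in M])  # fuzzy weights (l, m, u)
total = fuzzy_w.sum(axis=0)
fuzzy_w = fuzzy_w / total[::-1]               # normalize: l/sum(u), m/sum(m), u/sum(l)
crisp = fuzzy_w.mean(axis=1)                  # centroid defuzzification
crisp /= crisp.sum()
for i, w in enumerate(crisp):
    print(f"risk {i + 1}: weight = {w:.3f}")
```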

    A Comparison Analysis between the Standards Used in the Dnieper River Basin Clean-up and European Union Legislation

    Get PDF
    A recent case study examined the clean-up efforts of the Dnieper River Basin by three countries: Belarus, Russia, and Ukraine. The objective of the study was to provide a method for the identification, assessment, and prioritization of the most significant sources of pollution based on their impacts and characteristics. Herein, the standards employed in the Dnieper case study are comparatively analyzed against the relevant EU directives, to determine whether the standards employed in this project could serve as a benchmark for the environmental regulations that would be required if these three countries were admitted into the European Union. The main discrepancies found between the standards of the Dnieper case study and the EU directives were differing measurement standards and the vagueness of various standards in the case study.

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Get PDF
    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider across a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
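    A minimal token-bucket sketch, one classic traffic-shaping mechanism of the kind this tutorial surveys. The rate, burst size, and packet trace are illustrative; production datacenter shapers typically run in the NIC or hypervisor dataplane rather than in application code.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Refill tokens by elapsed time; send if enough tokens remain."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                   # packet conforms: transmit
        return False                      # packet exceeds profile: queue or drop

bucket = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)  # 10 Mbps shaper
for size in [1500] * 12:                  # a burst of MTU-sized packets
    print("send" if bucket.allow(size) else "shape (delay or drop)")
```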
    • 

    corecore