

    Towards making functional size measurement easily usable in practice

    Functional Size Measurement methods, such as IFPUG Function Point Analysis and COSMIC, are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. Then, we modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) evaluate existing simplified functional size measurement methods and derive our own simplified model; 2) explore whether simplified methods can be used at various stages of modeling and evaluate their accuracy; 3) analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process. Our research revealed that: 1) In general, it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, simplifying the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results that are similar to those reported in the literature. 
All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for Real-Time and non-Real-Time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort. As the models grow in completeness and detail, the measures increase in accuracy. Developers who use UML for requirements modeling can obtain early estimates of an application's size at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the product increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated with object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). In fact, object-oriented measures can be obtained automatically from models, thus dramatically decreasing the measurement effort in comparison with functional size measurement. 
In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria. Our research has expanded the knowledge of how to simplify the methods for measuring the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analyses that can be replicated by other researchers, to increase the reliability and generality of the results.
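The weight-skipping simplification described above can be sketched in code. The following is a minimal illustration, assuming the commonly cited IFPUG average weights per component type (any calibrated set of predefined weights could be substituted; the counts in the example are invented):

```python
# Simplified Function Point sizing: instead of analyzing each function's
# complexity to pick a low/average/high weight, every Base Functional
# Component type gets one fixed, predefined weight.
# The weights below are the commonly cited IFPUG "average" weights,
# used here only as an illustrative assumption.

AVERAGE_WEIGHTS = {
    "EI": 4,    # External Inputs
    "EO": 5,    # External Outputs
    "EQ": 4,    # External Inquiries
    "ILF": 10,  # Internal Logical Files
    "EIF": 7,   # External Interface Files
}

def simplified_ufp(counts):
    """Unadjusted Function Points from raw counts of each component type."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Example: an application with 10 inputs, 8 outputs, 5 inquiries,
# 6 internal logical files and 2 external interface files.
size = simplified_ufp({"EI": 10, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2})
print(size)  # 40 + 40 + 20 + 60 + 14 = 174
```

The measurer only has to count components, not weight each one, which is exactly the phase the abstract identifies as the expensive part of the standard process.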

    Development of a scaling factors framework to improve the approximation of software functional size with COSMIC - ISO 19761

    Many software development organizations strive to deliver high-quality products while balancing customer satisfaction, schedule, and budget. Estimating the development effort of software projects is one of the major challenges these organizations face, and it usually arises in the earliest phases of the development life cycle. To meet this challenge, software development organizations use early estimation techniques to obtain effort estimates up front (i.e., a priori estimates) in order to support project managers and technical leads in project planning and management. One approach to a priori effort estimation is based on approximating the expected functions of the software. This requires a measurement method to quantify those functions: the literature refers to measuring the functional size of software products, including enterprise applications. Several international standards have been adopted for measuring the functional size of software, such as ISO 19761: COSMIC. However, during the early phases of the software development life cycle, and more specifically in the process of estimating the functional size of the software, complete and detailed specifications of the software requirements are usually unavailable, which raises numerous challenges. For example: the level of granularity (i.e., 
the level of detail) of the software functional requirements specification is identified subjectively, using intuition, experience, and/or the opinions of domain experts; scale factors are not assigned; and there is no standardized notation for defining a standard set of scale factors that requirements engineers can assign to the functional requirements specifications of new software development projects in order to identify their level of granularity. These challenges affect the functional size approximation of new software development projects, since the result of that approximation is one of the main inputs to the effort estimation process, and they prevent software project managers from building realistic effort estimation models for new projects. The motivation of this research project is to help software development organizations, and in particular project managers and technical leads, build more accurate effort estimation models by improving one of the inputs of the effort estimation process, so as to improve the planning, management, and development of software in the early phases of the development life cycle. The goal of this research project is to improve one of the inputs of the effort estimation process, and in particular the quality of the functional size approximation of new software development projects. 
The main research objective is to design a framework that requirements engineers can use to assign scale factors to early versions of the software functional requirements specification in order to identify their level of granularity, which typically takes place after the feasibility study stage of new software development projects. To reach this objective, the main phases of the research methodology are: • an exploratory research phase, to study the impact of the research problem on functional size approximation; • a framework design phase, to design the framework that assigns scale factors to functional requirements specifications in order to identify their levels of granularity; and • a framework verification phase, to verify the usability of the framework with different groups of participants having different experience profiles, and to verify its applicability with a variety of case studies representing different software systems. The main outcome of this research project is a framework consisting of: • a meta-model that identifies the concepts, and the relationships among them, that requirements engineers must capture to reach a fully detailed functional specification of the software requirements; and • criteria for identifying the level of granularity of a software requirements specification and for assigning scale factors that rank its level of granularity. The framework's usability was verified with the same case study by three groups of participants from the software engineering industry, while its applicability was verified with four case studies.
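Since the framework above targets the approximation of COSMIC (ISO 19761) functional size, a minimal sketch of how a COSMIC measurement itself is obtained may help: each functional process is sized by counting its data movements (Entry, Exit, Read, Write), one CFP each. The process names and movements below are invented for illustration:

```python
# COSMIC (ISO 19761) assigns 1 CFP to each data movement of a functional
# process: Entry (E), Exit (X), Read (R), Write (W).

VALID_MOVEMENTS = {"E", "X", "R", "W"}

def cosmic_size(processes):
    """Total size in CFP: one point per data movement, summed over processes."""
    total = 0
    for name, movements in processes.items():
        unknown = set(movements) - VALID_MOVEMENTS
        if unknown:
            raise ValueError(f"{name}: unknown movement types {unknown}")
        total += len(movements)
    return total

# Illustrative functional processes for a small ordering application.
processes = {
    "create_order": ["E", "W", "X"],       # receive data, store it, confirm
    "list_orders":  ["E", "R", "X"],       # request, read, display
    "cancel_order": ["E", "R", "W", "X"],  # request, read, update, confirm
}
print(cosmic_size(processes))  # 3 + 3 + 4 = 10 CFP
```

The challenge the thesis addresses is visible here: when requirements are coarse-grained, the functional processes and their data movements cannot yet be enumerated this precisely, so the size must be approximated and the granularity level made explicit.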

    Improve software defect estimation with Six Sigma defect measures: empirical studies of imputation techniques on the ISBSG data repository with a high ratio of missing data

    This research reports on a set of empirical studies tackling the issue of improving software defect estimation models with Six Sigma defect measures (e.g., Sigma levels), using the ISBSG data repository, which has a high ratio of missing data. Three imputation techniques were selected for this research work: single imputation, regression imputation, and stochastic regression imputation. These techniques were used to impute the missing data in the variable ‘Total Number of Defects’ and were first compared with each other using common verification criteria. A further verification strategy was then developed to compare and assess the performance of the selected imputation techniques by verifying the predictive accuracy of the software defect estimation models obtained from the imputed datasets. A Sigma-based classification was carried out on the dataset imputed with the best-performing technique for software defect estimation. This classification was used to determine at which Sigma levels software projects can best be used to build software defect estimation models, resulting in Sigma-based datasets with Sigma ranges (e.g., a dataset of software projects ranging from 3 Sigma to 4 Sigma). Finally, software defect estimation models were built on the Sigma-based datasets.
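The three imputation techniques named above can be sketched on a toy dataset (the variable names and values are invented, not actual ISBSG fields; the regression is a simple least-squares fit of defects on effort, chosen only to make the contrast between the techniques concrete):

```python
import random
from statistics import mean

# Toy data: a complete predictor and a target with missing values (None).
effort  = [100, 250, 300, 420, 500, 610, 700]   # predictor, complete
defects = [5,   12,  None, 20,  None, 30,  33]  # target, has gaps

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x on the complete pairs only."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    mx = mean(x for x, _ in pairs)
    my = mean(y for _, y in pairs)
    b = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
    return my - b * mx, b

def single_imputation(ys):
    """Replace every gap with the mean of the observed values."""
    m = mean(y for y in ys if y is not None)
    return [m if y is None else y for y in ys]

def regression_imputation(xs, ys, noise_sd=0.0, rng=None):
    """Fill gaps from the fitted line; noise_sd > 0 gives the stochastic variant."""
    a, b = fit_line(xs, ys)
    rng = rng or random.Random(0)
    return [a + b * x + (rng.gauss(0, noise_sd) if noise_sd else 0.0)
            if y is None else y
            for x, y in zip(xs, ys)]

print(single_imputation(defects))
print(regression_imputation(effort, defects))                 # deterministic
print(regression_imputation(effort, defects, noise_sd=2.0))   # stochastic
```

Single imputation flattens the distribution toward the mean, regression imputation preserves the trend, and the stochastic variant additionally preserves some of the residual variance, which is why the three were compared on predictive accuracy.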

    An i*-based Reengineering Framework for Requirements Engineering

    Information Systems are a crucial asset of organizations and can provide competitive advantages to them. 
However, once the Information System is built, it has to be maintained and evolved, which includes changes to the requirements, the technology used, or the business processes supported. These changes are diverse in nature and may require different treatments according to their impact, ranging from small improvements to the deployment of a new Information System. In both situations, changes are addressed at the requirements level, where decisions can be analysed using fewer resources. Because Requirements Engineering and Business Process Reengineering methods share common activities, and the alignment of the Information System with the business strategy has to be maintained during its evolution, a Business Process Reengineering approach is adequate for addressing Information Systems development when there is an existing Information System to be used as a starting point. The i* framework is a well-consolidated goal-oriented approach that makes it possible to model Information Systems graphically, in terms of actors and the dependencies among them. The i* framework addresses Requirements Engineering and Business Process Reengineering, but none of the existing i*-based approaches provides a complete framework for reengineering. In order to explore the applicability of i* to a reengineering framework, we have defined PRiM: a Process Reengineering i* Method, which assumes that there is an existing process that serves as the basis for the specification of the new Information System. PRiM is a six-phase method that combines techniques from the fields of Business Process Reengineering and Requirements Engineering and defines new techniques where needed. 
As a result, PRiM addresses: 1) the analysis of the current process using socio-technical analysis techniques; 2) the construction of the i* model, differentiating the operationalization of the process from the strategic intentionality behind it; 3) the reengineering of the current process based on its analysis for improvements using goal acquisition techniques; 4) the generation of alternatives based on heuristics and patterns; 5) the evaluation of alternatives by defining structural metrics; and 6) the specification of the new Information System from the selected i* model. There are several techniques from the Requirements Engineering and Business Process Reengineering fields that can be used instead of the ones selected in PRiM. Therefore, in order not to enforce the application of a certain technique, we propose a more generic framework in which to use and combine them. Method Engineering is the discipline that constructs new methods from parts of existing ones, and so it is the approach adopted to define ReeF: a Reengineering Framework. In ReeF, the six phases of PRiM are abstracted and generalized in order to allow selecting the most appropriate techniques for each phase, depending on the user's expertise and the domain of application. As an example of the applicability of ReeF, the new method SARiM is defined. The main contributions of this work are twofold. On the one hand, two i*-based methods are defined: the PRiM method, which addresses process reengineering, and SARiM, which addresses software architecture reengineering. On the other hand, we provide several i*-based techniques for constructing i* models, generating alternatives, and evaluating them using Structural Metrics. These methods and techniques are based on an exhaustive review of existing work, and their validation is done by means of several formative case studies and an industrial case study. 
Tool support has been developed for the approach: REDEPEND-REACT supports the graphical modelling of i*, the generation of alternatives and the definition of Structural Metrics; and J-PRiM supports all the phases of the PRiM method using a textual visualization of the i* models.
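As a rough illustration of evaluating alternatives with metrics defined over i* models, the sketch below reduces each alternative to its dependency links and computes plain dependency coupling per actor. The actual structural metrics used in PRiM are richer, and the actors, dependums, and alternatives here are invented:

```python
from collections import Counter

# Each alternative i* model is reduced to dependency links
# (depender, dependum, dependee); the metric is dependency coupling,
# i.e. the number of dependencies each actor participates in.

def coupling(model):
    """Count the dependencies in which each actor participates."""
    c = Counter()
    for depender, _dependum, dependee in model:
        c[depender] += 1
        c[dependee] += 1
    return c

# Two invented process alternatives for handling customer orders.
alternative_a = [
    ("Customer", "Order placed", "Sales system"),
    ("Sales system", "Stock level", "Warehouse"),
    ("Customer", "Invoice", "Sales system"),
]
alternative_b = [
    ("Customer", "Order placed", "Clerk"),
    ("Clerk", "Order entered", "Sales system"),
    ("Sales system", "Stock level", "Warehouse"),
    ("Clerk", "Invoice", "Customer"),
]

# Compare how load concentrates on single actors in each alternative.
print(coupling(alternative_a))
print(coupling(alternative_b))
```

Even this crude count lets alternatives be ranked on a structural property (e.g. how much an alternative concentrates dependencies on one actor) before any system is built, which is the role structural metrics play in phase 5 of PRiM.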

    Estimation Fulfillment in Software Development Projects (Schätzwerterfüllung in Softwareentwicklungsprojekten)

    Effort estimates are of utmost economic importance in software development projects. Estimates bridge the gap between managers and the invisible and almost artistic domain of developers. They give managers a means to track and control projects. Consequently, numerous estimation approaches have been developed over the past decades, starting with Allan Albrecht's Function Point Analysis in the late 1970s. However, this work neither tries to develop just another estimation approach nor focuses on improving the accuracy of existing techniques. Instead of characterizing software development as a technological problem, this work understands software development as a sociological challenge. Consequently, it focuses on the question of what happens when developers are confronted with estimates that represent the major instrument of management control. Do estimates influence developers, or are they unaffected? Is it irrational to expect that developers start to communicate and discuss estimates, conform to them, work strategically, or hide progress or delay? This study shows that it is inappropriate to assume independence of estimated and actual development effort. A theory is developed and tested that explains how developers and managers influence the relationship between estimated and actual development effort; the theory thereby elaborates the phenomenon of estimation fulfillment.

    Design of a fuzzy logic software estimation process

    This thesis describes the design of a fuzzy logic software estimation process. Studies show that most projects finish over budget or later than the planned end date (Standish Group, 2009), even though software organizations have attempted to increase the success rate of software projects by making the process more manageable and, consequently, more predictable. Project estimation is an important issue because it is the basis for the allocation and management of the resources associated with a project. When the estimation process is not performed properly, it leads to higher risks in software projects, and organizations may end up with losses instead of the expected profits from their funded projects. The most important estimates need to be made in the very early phases of a project, when information is available only at a very high level of abstraction and is often based on a number of assumptions. In the software industry, the typical approach to estimating software projects is based on the experience of the employees in the organization. There are a number of problems with using experience for estimation purposes: for instance, the way the estimate is obtained is only implicit, i.e. there is no consistent way to derive the estimated value, and the experience is strongly tied to the experts, not to the organization. The research goal of this thesis is to design a software estimation process able to manage the lack of detailed and quantitative information in the early phases of the software development life cycle. The research approach aims to leverage the advantages of the experience-based approach, which can be used in the early phases of software estimation, while addressing some of the major problems this approach generates. The specific research objectives to be met by this improved software estimation process are: A. 
The proposed estimation process must use relevant techniques to handle uncertainty and ambiguity, in order to reflect the way practitioners make their estimates: the proposed estimation process must use the variables that practitioners use. B. The proposed estimation process must be useful in the early stages of the software development process. C. The proposed estimation process needs to preserve the experience or knowledge base for the organization: this implies an easy way to define and capture the experience of the experts. D. The proposed model must be usable by people with skills distinct from those of the people who configure the original context of the proposed model. In this thesis, an estimation process based on fuzzy logic is proposed, referred to as ‘Estimation of Projects in a Context of Uncertainty - EPCU’. The fuzzy logic approach was adopted for the proposed estimation process because it is a formal way to manage the uncertainty and the linguistic variables observed in the early phases of a project, when the estimates need to be obtained: using a fuzzy system makes it possible to capture the experience of the organization's experts via inference rules and to keep this experience within the organization. The experimentation phase typically presents a big challenge, in software engineering in particular, and all the more so since software project estimates must be made “a priori”: indeed, for verification purposes, there is typically a long elapsed time between the initial estimate and the completion of the projects, at which point the ‘true’ values of effort, duration and cost can be known with certainty in order to verify whether or not the estimates were right. This thesis includes a number of experiments with data from the software industry in Mexico. 
These experiments are organized into several scenarios, including one in which real projects completed in industry were re-estimated using, for estimation purposes, only the information that was available at the beginning of these projects. From the experimental results reported in this thesis, it can be observed that the proposed fuzzy-logic-based estimation process produces better estimates for these projects than those based on the expert opinion approach. Finally, to handle the large amount of calculation required by the EPCU estimation model, as well as the recording and management of the information generated by the model, a research prototype tool was designed and developed to perform the necessary calculations.
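The core fuzzy-logic machinery referred to above (fuzzification of linguistic variables, rule-based inference, defuzzification to a crisp estimate) can be sketched as follows. This is not the actual EPCU model: the input variable, membership functions, and rule base are invented for illustration, and a real application would combine several linguistic inputs:

```python
# Hedged sketch of a fuzzy-logic estimation step: a linguistic input
# ("problem complexity", rated 0..10) is fuzzified, rules map it to
# effort categories, and a crisp value is recovered by centroid
# defuzzification over a discretized output universe (0..100 person-days).

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(complexity):
    """Degree of membership of the input in each linguistic term."""
    return {
        "low":    tri(complexity, -1, 0, 5),
        "medium": tri(complexity, 0, 5, 10),
        "high":   tri(complexity, 5, 10, 11),
    }

# Output terms (triangles on the effort universe) and the rule base.
EFFORT_TERMS = {"small": (0, 10, 40), "medium": (20, 50, 80), "large": (60, 90, 100)}
RULES = {"low": "small", "medium": "medium", "high": "large"}

def estimate(complexity):
    degrees = fuzzify(complexity)
    # Mamdani-style inference: clip each output term at its rule's firing
    # strength, aggregate by max, defuzzify by centroid.
    num = den = 0.0
    for y in range(101):
        mu = max(min(degrees[c], tri(y, *EFFORT_TERMS[e])) for c, e in RULES.items())
        num += y * mu
        den += mu
    return num / den if den else 0.0

print(round(estimate(7.0), 1))  # effort estimate for a fairly complex project
```

The inference rules are where the experts' experience is encoded, which is how this kind of system keeps estimation knowledge within the organization rather than with individual experts.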

    Electromagnetic Waves

    This book is dedicated to various aspects of electromagnetic wave theory and its applications in science and technology. The covered topics include the fundamental physics of electromagnetic waves, theory of electromagnetic wave propagation and scattering, methods of computational analysis, material characterization, electromagnetic properties of plasma, analysis and applications of periodic structures and waveguide components, and finally, the biological effects and medical applications of electromagnetic fields

    ICSEA 2021: The Sixteenth International Conference on Software Engineering Advances

    The Sixteenth International Conference on Software Engineering Advances (ICSEA 2021), held on October 3-7, 2021 in Barcelona, Spain, continued a series of events covering a broad spectrum of software-related topics. The conference covered fundamentals of designing, implementing, testing, validating and maintaining various kinds of software. The tracks treated the topics from theory to practice, in terms of methodologies, design, implementation, testing, use cases, tools, and lessons learnt. The conference topics covered classical and advanced methodologies, open source, agile software, as well as software deployment and software economics and education. The conference had the following tracks: Advances in fundamentals for software development; Advanced mechanisms for software development; Advanced design tools for developing software; Software engineering for service computing (SOA and Cloud); Advanced facilities for accessing software; Software performance; Software security, privacy, safeness; Advances in software testing; Specialized software advanced applications; Web accessibility; Open source software; Agile and Lean approaches in software engineering; Software deployment and maintenance; Software engineering techniques, metrics, and formalisms; Software economics, adoption, and education; Business technology; Improving productivity in research on software engineering; Trends and achievements. Similar to the previous edition, this event continued to be very competitive in its selection process and very well perceived by the international software engineering community. As such, it attracted excellent contributions and active participation from all over the world. We were very pleased to receive a large number of top-quality contributions. We take this opportunity to warmly thank all the members of the ICSEA 2021 technical program committee as well as the numerous reviewers. 
The creation of such a broad and high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contribute to ICSEA 2021. We truly believe that, thanks to all these efforts, the final conference program consists of top-quality contributions. This event could not have become a reality without the support of many individuals, organizations and sponsors. We also gratefully thank the members of the ICSEA 2021 organizing committee for their help in handling the logistics and for their work in making this professional meeting a success. We hope ICSEA 2021 was a successful international forum for the exchange of ideas and results between academia and industry and that it promoted further progress in software engineering research.