An empirically validated requirements process
The development of the Scenario-based Requirements Process, the central subject of this thesis, began more than two decades ago. During this time it has been revised in several research projects while simultaneously being applied in numerous cases. The detection of some problems raised concerns about the quality and consistency of the constructed models and, consequently, of the software requirements obtained. The construction strategy consists of three stages: understanding the current Universe of Discourse, planning the future Universe of Discourse, and making the software requirements explicit. Two models are used throughout the process: the Language Extended Lexicon (LEL) and the Scenarios, each with its own characteristics. Both the process and the models have been empirically analyzed in this thesis. Part of the results consists in replacing the Derive Current Scenarios activity with a new cognitive mechanism that helps build a higher-quality first version of the Scenarios, since the existing activity produces Scenarios with consistency and completeness problems. The main source of these problems has been disregarding that the LEL is declarative while the Scenarios are procedural. The new heuristic uses a procedural, incremental mechanism based on proximity of situations and, in addition, draws on all the knowledge available in the macrosystem. Similarly, it was found that the LEL Construction Heuristic, which is driven by an initial list of symbols, hinders the detection of new symbols, harming the completeness of the glossary. Here, too, a new heuristic is proposed that uses the initial list as a reminder and identifies symbols by proximity, yielding significant improvements in the LELs built.
Other changes are cross-cutting, such as the incorporation of conceptual hierarchies and context points of view, since they affect both the LEL and the Scenarios. In this way, requirements-sensitive information is included, increasing the level of detail and precision of all representations. To reduce unintentional errors during the description of the LEL and the Scenarios, a Classification View is proposed, which can be activated on demand and provides additional information on each LEL symbol. The remaining modifications are new models added to the process. The first is the Requirements LEL Construction, which revealed an important omission in the existing process: it did not consider the evolution the lexicon undergoes during the Requirements Engineering process. The use of the LEL in the Future Scenarios and in the software requirements specification document is, paradoxically, a new source of ambiguity, since the lexicon of clients and users is not suited to describing the future business process with the software system in operation. Finally, the second addition concerns information that appears spontaneously but has no place in the model currently being built. This extemporaneous information needs to be recorded when it appears so that it can be retrieved at the right time, ensuring it is understood when the moment comes to incorporate it into a model. To that end, a mechanism is described that supports the effective treatment of this kind of information in any requirements process.
All the changes and additions to the Scenario-based Requirements Process contribute to obtaining a Software Requirements Specification of the highest possible quality, providing more complete and consistent models. (Facultad de Informática)
Actas del XXIV Workshop de Investigadores en Ciencias de la Computación: WICC 2022
Compilation of the papers presented at the XXIV Workshop de Investigadores en Ciencias de la Computación (WICC), held in Mendoza in April 2022. (Red de Universidades con Carreras en Informática, RedUNCI)
Beyond 100: The Next Century in Geodesy
This open access book contains 30 peer-reviewed papers based on presentations at the 27th General Assembly of the International Union of Geodesy and Geophysics (IUGG). The meeting was held from July 8 to 18, 2019 in Montreal, Canada, its theme being the celebration of the centennial of the establishment of the IUGG. The centennial was also a good opportunity to look forward to the next century, as reflected in the title of this volume. The papers represent a cross-section of present activity in geodesy and highlight future directions in the field as the second century of the IUGG begins. During the meeting, the International Association of Geodesy (IAG) organized one Union Symposium, 6 IAG Symposia, 7 Joint Symposia with other associations, and 20 business meetings; in addition, the IAG co-sponsored 8 Union Symposia and 15 Joint Symposia. In total, 3952 participants registered, 437 of them with IAG priority, and there were 234 symposia and 18 workshops with 4580 presentations, of which 469 were in IAG-associated symposia. The volume includes papers associated with all of the IAG and joint symposia from the meeting, spanning all aspects of modern geodesy and their linkages to the earth and environmental sciences, and continues the long-running IAG Symposia Series.
Leveraging Intermediate Artifacts to Improve Automated Trace Link Retrieval
Software traceability establishes a network of connections between diverse artifacts such as requirements, design, and code. However, given the cost and effort of creating and maintaining trace links manually, researchers have proposed automated approaches using information retrieval techniques. Current approaches focus almost entirely on generating links between pairs of artifacts and have not leveraged the broader network of interconnected artifacts. In this paper we investigate the use of intermediate artifacts to enhance the accuracy of the generated trace links, focusing on paths consisting of source, target, and intermediate artifacts. We propose and evaluate combinations of techniques for computing semantic similarity, scaling scores across multiple paths, and aggregating results from multiple paths. We report results from five projects, including one large industrial project. We find that leveraging intermediate artifacts improves the accuracy of end-to-end trace retrieval across all datasets and accuracy metrics. After further analysis, we discover that leveraging intermediate artifacts is only helpful when a project's artifacts share a common vocabulary, which tends to occur in refinement and decomposition hierarchies of artifacts. Given our hybrid approach that integrates both direct and transitive links, we observed little to no loss of accuracy when intermediate artifacts lacked a shared vocabulary with source or target artifacts.
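The path-based combination described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the term-count cosine similarity stands in for any semantic model, and the weakest-link path score, best-path aggregation, and fixed blending weight are all assumptions made for the sketch.

```python
# Illustrative sketch: blending direct and transitive trace-link scores.
# similarity(), the min-over-a-path rule, max-over-paths aggregation, and
# the alpha blend are assumptions for illustration only.
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity over simple term counts (stand-in for a semantic model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def transitive_score(source: str, target: str, intermediates: list[str]) -> float:
    """Score each source->intermediate->target path by its weakest link,
    then keep the best path."""
    paths = [min(similarity(source, m), similarity(m, target)) for m in intermediates]
    return max(paths, default=0.0)

def hybrid_score(source: str, target: str, intermediates: list[str],
                 alpha: float = 0.5) -> float:
    """Blend the direct link score with the best transitive path score."""
    direct = similarity(source, target)
    return alpha * direct + (1 - alpha) * transitive_score(source, target, intermediates)
```

When no intermediate shares vocabulary with the source or target, the transitive term contributes nothing and the direct score dominates, which mirrors the paper's observation that the hybrid approach loses little accuracy in that case.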
Towards a Model-Centric Software Testing Life Cycle for Early and Consistent Testing Activities
The constant improvement of the computing power available nowadays enables the accomplishment of ever more complex tasks. The resulting increase in the complexity of hardware and software solutions for realizing the desired functionality requires constant improvement of the development methods used. On the one hand, the share of agile development practices, as well as of test-driven development, has increased over the last decades. On the other hand, this trend results in the need to reduce complexity with suitable methods. At this point, the concept of abstraction comes into play, which manifests itself in model-based approaches such as MDSD or MBT.
The thesis is motivated by the fact that the earliest possible detection and elimination of faults has a significant influence on product costs. Therefore, a holistic approach is developed in the context of model-driven development that allows testing to be applied already in early phases, and especially to the model artifacts, i.e. it provides a shift left of the testing activities. To comprehensively address the complexity problem, a model-centric software testing life cycle is developed that maps the process steps and artifacts of classical testing to the model level.
To this end, the conceptual basis is first created by putting the available model artifacts of all domains into context. In particular, structural mappings are specified across the included domain-specific model artifacts to establish a sufficient basis for all the process steps of the life cycle. In addition, a flexible metamodel including operational semantics is developed, which enables experts to carry out an abstract test execution at the model level.
Based on this, approaches for test case management, automated test case generation, evaluation of test cases, and quality verification of test cases are developed. In the context of test case management, a mechanism is realized that enables the selection, prioritization, and reduction of Test Model artifacts usable for test case generation; i.e. a targeted set of test cases is generated that satisfies quality criteria such as coverage at the model level. These quality requirements are verified using a mutation-based analysis of the identified test cases, which builds on the model basis. As the last step of the model-centric software testing life cycle, two approaches are presented that allow an abstract execution of the test cases in the model context through structural analysis and a form of model interpretation concerning data flow information. All the approaches are placed in the context of related work and examined for their feasibility by means of a prototypical implementation within the Architecture And Analysis Framework. Subsequently, the described approaches and their concepts are evaluated qualitatively as well as quantitatively, and case studies show the practical applicability of the approach.
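The selection-and-reduction step driven by mutation analysis can be illustrated with a small sketch. This is a simplified stand-in, assuming a precomputed kill matrix from abstract test execution; the greedy covering strategy is an illustrative choice, not the thesis's actual mechanism.

```python
# Simplified sketch of mutation-driven test selection: greedily pick test
# cases until every killable mutant is covered. The kill matrix (which
# model-level mutants each test kills) is assumed to come from a prior
# abstract test execution.

def select_tests(kill_matrix: dict[str, set[str]]) -> list[str]:
    """kill_matrix maps a test-case id to the set of mutant ids it kills.
    Returns a reduced, prioritized test set covering all killable mutants."""
    remaining = set().union(*kill_matrix.values()) if kill_matrix else set()
    selected: list[str] = []
    while remaining:
        # Pick the test that kills the most not-yet-covered mutants.
        best = max(kill_matrix, key=lambda t: len(kill_matrix[t] & remaining))
        gained = kill_matrix[best] & remaining
        if not gained:
            break
        selected.append(best)
        remaining -= gained
    return selected
```

The order in which tests are selected doubles as a prioritization: earlier tests contribute more to the mutation score, so truncating the list yields a reduced suite with a known loss.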
Multi-Agent Systems
A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been taken up and used in several application domains.
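A minimal sketch of the idea, assuming a contract-net-style task allocation between a coordinator and autonomous worker agents; the agent names and the lowest-bid protocol are illustrative inventions, not part of the abstract:

```python
# Minimal multi-agent sketch: a coordinator announces a task and worker
# agents bid on it autonomously; the lowest bid wins. The bidding protocol
# is an illustrative assumption.

class WorkerAgent:
    def __init__(self, name: str, cost: float):
        self.name, self.cost = name, cost

    def bid(self, task: str) -> float:
        # Each agent decides its bid from its own local knowledge.
        return self.cost

class Coordinator:
    def allocate(self, task: str, agents: list[WorkerAgent]) -> str:
        # Contract-net style: collect bids, award the task to the cheapest agent.
        bids = {a.name: a.bid(task) for a in agents}
        return min(bids, key=bids.get)

agents = [WorkerAgent("a1", 3.0), WorkerAgent("a2", 1.5), WorkerAgent("a3", 2.0)]
winner = Coordinator().allocate("paint fence", agents)  # "a2" wins with the lowest bid
```

No single agent holds global knowledge here; the solution emerges from the interaction, which is the point of the MAS approach.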
XX Workshop de Investigadores en Ciencias de la Computación - WICC 2018 : Libro de actas
Proceedings of the XX Workshop de Investigadores en Ciencias de la Computación (WICC 2018), held at the Facultad de Ciencias Exactas y Naturales y Agrimensura of the Universidad Nacional del Nordeste on April 26 and 27, 2018. (Red de Universidades con Carreras en Informática, RedUNCI)
Supporting the grow-and-prune model for evolving software product lines
Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through the systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has shown great flexibility in incrementally evolving an SPL by letting the products grow, and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model in initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for conducting this practice. To address this issue, this thesis elaborates on the SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Here, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products.
To address this issue, this thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs.
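A sync path built on plain git could be sketched as an explicit plan of commands, as below. The branch names, and the choice of cherry-pick for pruning and merge for growing, are assumptions made for the sketch, not the thesis's actual tool.

```python
# Sketch: modelling a "sync path" as a plan of git operations that ports
# selected product commits back to the core-asset branch (prune) and
# upgrades a product from it (grow). Branch names and the cherry-pick /
# merge strategies are illustrative assumptions.

def prune_plan(commits: list[str], core_branch: str = "core-assets") -> list[str]:
    """Plan: port interesting product customizations onto the core branch."""
    return ([f"git checkout {core_branch}"]
            + [f"git cherry-pick {sha}" for sha in commits])

def grow_plan(product_branch: str, core_branch: str = "core-assets") -> list[str]:
    """Plan: upgrade a product with newer core-asset releases and bug fixes."""
    return [f"git checkout {product_branch}", f"git merge {core_branch}"]
```

Keeping the sync path as data rather than executing git directly makes the operation inspectable and repeatable per product, which is what "first-class construct" suggests.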