53 research outputs found

    The Kconfig Variability Framework as a Feature Model

    External tools are often used to simplify the handling of software variability. One such tool is Kconfig, which the Linux kernel uses to create concrete software configurations. Kconfig operates on text files in which the variability structure of the associated software project is defined; these files are commonly referred to as Kconfig files. Kconfig files can be analyzed to detect problems in the variability structure. Feature-oriented programming (FOP) is also used to manage software variability. Within FOP, the variability structure of a software project is represented in a so-called feature model. Tools exist for analyzing feature models, but they cannot be applied to Kconfig files, because a transformation between Kconfig files and feature models has so far been missing. In this thesis, we present a methodology for correctly transforming Kconfig files into feature models, so that feature-model analysis tools can also be applied to Kconfig files. We evaluate the correctness of our transformation with automated and manual procedures. Our methodology successfully transforms selected Kconfig files with non-trivial structure into semantically equivalent feature models.
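
    A minimal sketch of the core idea, assuming a bool-only subset of Kconfig (the KconfigSymbol structure and translate helper below are hypothetical illustrations, not the thesis's actual tooling): Kconfig's 'depends on' and 'select' clauses map onto implications in a propositional feature model.

        from dataclasses import dataclass, field

        @dataclass
        class KconfigSymbol:
            """A boolean Kconfig symbol. Real Kconfig also has tristate
            symbols, defaults, and visibility conditions, which is what
            makes a correct transformation non-trivial."""
            name: str
            depends_on: list = field(default_factory=list)
            selects: list = field(default_factory=list)

        def translate(symbols):
            """Map each symbol's clauses to feature-model constraints."""
            constraints = []
            for s in symbols:
                for dep in s.depends_on:
                    # 'depends on': the feature requires its dependency
                    constraints.append(f"{s.name} => {dep}")
                for sel in s.selects:
                    # 'select': enabling s forces the selected feature on
                    constraints.append(f"{s.name} => {sel}")
            return constraints

        # config USB_STORAGE: depends on USB, select SCSI
        usb = KconfigSymbol("USB_STORAGE", depends_on=["USB"], selects=["SCSI"])
        print(translate([usb]))  # ['USB_STORAGE => USB', 'USB_STORAGE => SCSI']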

    Decisioning 2022: Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture

    Sustainable agriculture is one of the Sustainable Development Goals (SDGs) proposed by the United Nations (UN), but little systematic work on knowledge discovery and decision making has been applied to it. Knowledge discovery and decision making have become active research areas in recent years. We are entering the era of FAIR (Findable, Accessible, Interoperable, Reusable) data science, in which linked data with a high degree of variety and different degrees of veracity can be easily correlated and put in perspective to gain an empirical, scientific view of best practices in the sustainable-agriculture domain. This requires combining multiple methods and technologies, such as elicitation, specification, validation, semantic web technologies, information retrieval, formal concept analysis, collaborative work, semantic interoperability, ontological matching, smart contracts, and multiple decision making. Decisioning 2022 is the first workshop on Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture. It was organized by six research teams from France, Argentina, Colombia, and Chile to explore the current frontier of knowledge and applications in different areas related to knowledge discovery and decision making. The workshop format aims at discussion and knowledge exchange between academia and industry.

    Visualizing the customization endeavor in product-based-evolving software product lines: a case of action design research

    [EN] Software Product Lines (SPLs) aim at systematically reusing software assets and deriving products (a.k.a. variants) out of those assets. However, it is not always possible to handle SPL evolution directly through these reusable assets. Time-to-market pressure, expedited bug fixes, or product specifics cause evolution to first happen at the product level and only later be merged back into the SPL platform where the core assets reside. This is referred to as product-based evolution. In this scenario, deciding when and what should go into the next SPL release is far from trivial. Several questions arise. How much effort are developers spending on product customization? Which are the most customized core assets? To what extent is the core asset code being reused for a given product? We refer to this endeavor as Customization Analysis, i.e., understanding the functional increments in adjusting products from the last SPL platform release. The scale of the SPLs' code base calls for customization analysis to be conducted through Visual Analytics tools. This work addresses the design principles for such tools through a joint effort between academia and industry, specifically, Danfoss Drives, a company division in charge of the P400 SPL. Accordingly, we adopt an Action Design Research approach where answers are sought by interacting with the practitioners in the studied situations. We contribute by providing informed goals for customization analysis as well as an intervention in terms of a visual analytics tool. We conclude by discussing to what extent this experience can be generalized to product-based evolving SPL organizations other than Danfoss Drives. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is supported by the Spanish Ministry of Science, Innovation and Universities grant number RTI2018099818-B-I00 and MCIU-AEI TIN2017-90644-REDT (TASOVA). ONEKIN enjoys support from the program 'Grupos de Investigación del Sistema Universitario Vasco 2019-2021' under contract IT1235-19. Raul Medeiros enjoys a doctoral grant from the Spanish Ministry of Science and Innovation.
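
    One concrete customization-analysis metric can be sketched as follows, assuming products keep per-file copies of core assets; the metric and helper names are illustrative assumptions, not Danfoss Drives' actual tooling.

        import difflib
        from pathlib import Path

        def customized_lines(platform_file: Path, product_file: Path) -> int:
            """Lines added or removed in a product's copy of a core asset,
            relative to the last SPL platform release."""
            platform = platform_file.read_text().splitlines()
            product = product_file.read_text().splitlines()
            diff = difflib.unified_diff(platform, product, lineterm="")
            return sum(1 for line in diff
                       if line.startswith(("+", "-"))
                       and not line.startswith(("+++", "---")))

        # Ranking core assets by this score answers "which are the most
        # customized core assets?" for one product variant:
        # scores = {a: customized_lines(platform / a, product / a) for a in assets}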

    Proceedings of the XXIV Workshop de Investigadores en Ciencias de la Computación: WICC 2022

    Compilation of the papers presented at the XXIV Workshop de Investigadores en Ciencias de la Computación (WICC), held in Mendoza in April 2022.

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    [EN] Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is itself a result. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
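
    The generative idea can be pictured with a minimal sketch, assuming a much-reduced meta-model (the Dashboard and Visualization classes and the HTML target are illustrative assumptions; the thesis's meta-model is far richer): a model instance is turned into code by a model-to-text step.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Visualization:
            kind: str          # e.g. "bar", "line", "map"
            data_source: str   # where the component pulls its data from

        @dataclass
        class Dashboard:
            title: str
            components: List[Visualization]

        def generate(dashboard: Dashboard) -> str:
            """Model-to-text transformation: model instance -> skeleton markup."""
            body = "\n".join(
                f'  <div class="viz" data-kind="{v.kind}" data-src="{v.data_source}"></div>'
                for v in dashboard.components)
            return f"<h1>{dashboard.title}</h1>\n{body}"

        print(generate(Dashboard("PhD Programme KPIs",
                                 [Visualization("bar", "enrolments.csv")])))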

    Analysis and resolution of the problems associated with the design of IoT systems

    When designing an IoT system, regardless of whether one starts from an existing system that operates offline or builds a system from scratch, the following challenges arise. First, IoT systems may comprise a wide variety of devices, each using different communication protocols and physical media to establish communication. Moreover, the devices may be located in very distant geographic locations, where they are governed by different legal systems and where the cost structure associated with connectivity between them differs greatly. Furthermore, the choice of hardware for each device may vary depending on the risks associated with the activity in which it is involved; on the costs of acquisition, installation, and maintenance in the geographic region where it is deployed; on the communication protocols to be used; on the desired quality level of each device's performance; and on other technical or commercial factors. The selection of the software technologies to be used on each device may depend on factors similar to those mentioned for hardware selection. Beyond studying the particular needs of each device, the overall architecture of the IoT system must be analyzed. This architecture must account for the different ways of connecting devices to one another; device hierarchies; the web servers involved; the service providers to be contracted; the means of storing, processing, and publishing information; the people involved; and the other internal or external components that interact in the system. All of the above considerations must be made within a framework that guarantees the privacy and security of the information handled. For this reason, some geographic regions have established legislation on the subject, which must be taken into account from the very beginning of the IoT system design. However, if the rules established in such legislation are not sufficiently clear or complete (or do not even exist), international standards on data privacy and security, in hardware and software, can serve as a foundation. This article presents a research line that addresses the analysis and resolution of the problems associated with the design of IoT systems.

    The Revolutionary Ecological Legacy of Herbert Marcuse

    Marcuse argued that U.S.-led globalized capitalism represented the irrational perfection of waste and the degradation of the earth, resurgent sexism, racism, bigoted nationalism, and warlike patriotism. Inspired by the revolutionary legacy of Herbert Marcuse's social and political philosophy, this volume appeals to the energies of those engaged in a wide range of contemporary social justice struggles: ecosocialism, antiracism, the women's movement, LGBTQ rights, and antiwar forces. The intensification of these regressive political tendencies today must be countered, and this can best be accomplished through radical collaboration around an agenda recognizing the basic economic and political needs of diverse subaltern communities. Marcuse's Great Refusal is elaborated as a collective project of system negation that is becoming a new general interest.

    OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective

    Computer-based automation in industrial appliances has led to a growing number of logically dependent but physically separated embedded control units per appliance. Many of these components are safety-critical systems and require adherence to safety standards, which is at odds with the relentless demand for new features. Features lead to a growing number of control units per appliance and to increasing complexity of the overall software stack, which is unfavourable for safety certification. Modern CPUs provide means to revise the traditional separation-of-concerns design primitive through the consolidation of systems, which yields new engineering challenges concerning the entire software and system stack. Multi-core CPUs favour the economic consolidation of formerly separated systems onto one efficient hardware unit. Nonetheless, the system architecture must guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments. In parallel, there is an ongoing trend to substitute ordinary proprietary base-platform software components with mature OSS variants for economic and engineering reasons. However, there are fundamental differences in the processual properties of OSS and proprietary development processes. Using OSS in safety-critical systems therefore requires development-process assessment techniques that build an evidence-based foundation for certification efforts upon empirical software engineering methods. In this thesis, I approach the problem from both sides: the software engineering and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that quantify characteristics of distributed OSS development processes. I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems. In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To this end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture shall eradicate any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
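
    The isolation argument can be illustrated with a small sketch, assuming a static-partitioning model in which each criticality domain owns its CPU cores and memory exclusively (the Domain structure and the check are illustrative assumptions, not the thesis's implementation): with disjoint resources fixed at configuration time, no hypervisor scheduling is needed at runtime.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Domain:
            name: str
            criticality: str   # e.g. "safety" or "best-effort"
            cores: frozenset   # CPU cores assigned exclusively to this domain
            mem: range         # physical memory assigned exclusively

        def check_isolation(domains):
            """Freedom from interference requires pairwise disjoint resources."""
            for i, a in enumerate(domains):
                for b in domains[i + 1:]:
                    assert not (a.cores & b.cores), f"{a.name}/{b.name} share cores"
                    overlap = max(a.mem.start, b.mem.start) < min(a.mem.stop, b.mem.stop)
                    assert not overlap, f"{a.name}/{b.name} share memory"

        check_isolation([
            Domain("control", "safety", frozenset({0, 1}), range(0x0000_0000, 0x4000_0000)),
            Domain("linux", "best-effort", frozenset({2, 3}), range(0x4000_0000, 0x8000_0000)),
        ])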