87 research outputs found

    Coopetition in an open-source way: lessons from mobile and cloud computing infrastructures

    An increasing amount of technology is no longer developed in-house. Instead, we are in a new age in which technology is developed by a networked community of individuals and organizations whose relations to each other are based on mutual interest. Advances arising from research on platforms, ecosystems, and infrastructures can provide valuable knowledge for better understanding and explaining technology development among a network of firms. Perhaps surprisingly, recent research suggests that technology can be jointly developed by rival firms in an open-source way. For instance, the mobile device makers Apple and Samsung are known to have continued collaborating in open-source projects while fighting expensive patent wars in the courts. Building on multidisciplinary theory in open-source software, cooperation among competitors (also known as coopetition), and digital infrastructures, my coauthors and I explored how rival firms cooperate in the joint development of open-source infrastructures. Assimilating a wide variety of paradigms and analytical approaches, this doctoral research combined the qualitative analysis of naturally occurring data (QA) with mining software repositories (MSR) and social network analysis (SNA) within a set of case studies. Turning to the mobile and cloud computing industries in general, and the WebKit and OpenStack open-source infrastructures in particular, we found that qualitative ethnographic materials, combined with social network visualizations, provide a rich medium that enables a better understanding of the competitive and cooperative issues that are simultaneously present and interconnected in open-source infrastructures. Our research contributes to the management literature on coopetition strategy and, more importantly, to Information Systems, by addressing both cooperation and competition within the development of highly networked open-source infrastructures.
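    The MSR + SNA part of such a mixed-methods pipeline can be illustrated in miniature. The sketch below (an illustration, not the thesis's actual tooling) builds an inter-firm collaboration network from commit records and computes degree centrality with networkx; the input records and the firm-from-email-domain heuristic are assumptions made for the example.

```python
# Minimal sketch: derive an inter-firm collaboration network from commit
# records, in the spirit of MSR + SNA studies of open-source coopetition.
from itertools import combinations
from collections import defaultdict

import networkx as nx

# (file_path, author_email) pairs, e.g. extracted from `git log --name-only`.
# These records are illustrative, not real WebKit history.
commits = [
    ("Source/WebCore/dom/Node.cpp", "alice@apple.com"),
    ("Source/WebCore/dom/Node.cpp", "bob@samsung.com"),
    ("Source/JavaScriptCore/jit/JIT.cpp", "carol@google.com"),
    ("Source/JavaScriptCore/jit/JIT.cpp", "alice@apple.com"),
]

def firm_of(email: str) -> str:
    # Crude affiliation heuristic: the email domain stands in for the firm.
    return email.split("@")[1].split(".")[0]

# Firms that touched the same file are linked; edge weight counts shared files.
firms_per_file = defaultdict(set)
for path, email in commits:
    firms_per_file[path].add(firm_of(email))

g = nx.Graph()
for firms in firms_per_file.values():
    for a, b in combinations(sorted(firms), 2):
        weight = g.get_edge_data(a, b, {"weight": 0})["weight"]
        g.add_edge(a, b, weight=weight + 1)

# Degree centrality hints at which firms broker cooperation in the infrastructure.
print(nx.degree_centrality(g))
```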

    Requirements engineering: foundation for software quality


    From Bugs to Decision Support – Leveraging Historical Issue Reports in Software Evolution

    Software developers in large projects work in complex information landscapes, and staying on top of all relevant software artifacts is an acknowledged challenge. As software systems often evolve over many years, a large number of issue reports is typically managed during the lifetime of a system, representing the units of work needed for its improvement, e.g., defects to fix, requested features, or missing documentation. Efficient management of incoming issue reports requires successful navigation of the information landscape of a project. In this thesis, we address two tasks involved in issue management: Issue Assignment (IA) and Change Impact Analysis (CIA). IA is the early task of allocating an issue report to a development team, and CIA is the subsequent activity of identifying how source code changes affect the existing software artifacts. While IA is fundamental in all large software projects, CIA is particularly important to safety-critical development. Our solution approach, grounded in surveys of industry practice as well as the scientific literature, is to support navigation by combining information retrieval and machine learning into Recommendation Systems for Software Engineering (RSSE). While the sheer number of incoming issue reports might overwhelm a human developer, our techniques instead benefit from the availability of ever-growing training data. We leverage the volume of issue reports to develop accurate decision support for software evolution. We evaluate our proposals both by deploying an RSSE in two development teams and in simulation scenarios, i.e., by assessing the correctness of the RSSEs' output when replaying the historical inflow of issue reports. In total, more than 60,000 historical issue reports are involved in our studies, originating from the evolution of five proprietary systems at two companies. Our results show that RSSEs for both IA and CIA can help developers navigate large software projects, in terms of locating development teams and software artifacts. Finally, we discuss how to support the transfer of our results to industry, focusing on addressing the context dependency of our tool support by systematically tuning parameters to a specific operational setting.
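    As an illustration of IA framed as supervised text classification, the sketch below trains a TF-IDF-based classifier on historical issue reports and recommends a team for a new one. It is a minimal stand-in for the RSSE described above, not the thesis's system; the example reports and team names are invented.

```python
# Minimal sketch of Issue Assignment (IA) as text classification:
# learn from resolved issue reports which team handles what.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Historical issue reports (title + description) and the team that resolved them.
reports = [
    "crash when opening settings dialog on startup",
    "add export to CSV for the monthly report view",
    "null pointer exception in network retry handler",
]
teams = ["ui-team", "reporting-team", "platform-team"]

# TF-IDF features feed a linear classifier; retraining on the growing
# issue history is cheap, which is what an RSSE exploits.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reports, teams)

# Recommend a team for an incoming report; a real RSSE would return a ranked list.
print(model.predict(["settings dialog freezes after startup"])[0])
```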

    OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective

    Computer-based automation in industrial appliances has led to a growing number of logically dependent, but physically separated, embedded control units per appliance. Many of these components are safety-critical systems that require adherence to safety standards, which is at odds with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to an increasing complexity of the overall software stack, both unfavourable for safety certification. Modern CPUs provide the means to revise traditional separation-of-concerns design primitives through the consolidation of systems, which yields new engineering challenges concerning the entire software and system stack. Multi-core CPUs favour the economic consolidation of formerly separated systems into one efficient hardware unit. Nonetheless, the system architecture must guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments. In parallel, there is an ongoing trend to substitute mature OSS variants for proprietary base platform software components, for economic and engineering reasons. However, the development processes of OSS and proprietary software differ fundamentally in their processual properties. Using OSS in safety-critical systems therefore requires development process assessment techniques that build an evidence-based foundation for certification efforts upon empirical software engineering methods. In this thesis, I approach the problem from both sides: the software engineering and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that quantify characteristics of distributed OSS development processes, and I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems. In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To this end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture eliminates any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
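    A minimal illustration of one such ex-post process analysis: measuring what fraction of a repository's commits carry review evidence (Reviewed-by trailers, a convention in Linux-style OSS projects). This single indicator is only a sketch of the kind of quantification such a thesis develops; the repository path and trailer choice are assumptions made for the example.

```python
# Sketch of an ex-post development-process measurement over git history:
# the share of commits whose messages carry a Reviewed-by trailer.
import subprocess

def review_coverage(repo_path: str) -> float:
    # One commit message per record, separated by a NUL byte for safe splitting.
    raw = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    messages = [m for m in raw.split("\x00") if m.strip()]
    reviewed = sum("Reviewed-by:" in m for m in messages)
    return reviewed / len(messages) if messages else 0.0

# Run against the current directory; point at any OSS checkout of interest.
print(f"review coverage: {review_coverage('.'):.1%}")
```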

    Development of Agent-Based Simulation Models for Software Evolution

    Software has become part of our everyday lives, which brings increasing demands on its adaptability to rapidly changing environments. This evolutionary process of software is studied by a research area within software engineering called software evolution. Changes to a software system over time are caused by the work of its developers; for this reason, developer contribution behaviour is central to analysing the evolution of a software project. For the analysis of real projects, a variety of open source projects is freely available. For the simulation of software projects, we use multi-agent systems, because they allow us to describe the behaviour of the developers in detail. In this thesis, we develop several successive agent-based models that cover different aspects of software evolution. We start with a simple model without dependencies between the agents, which can reproduce the growth of a real project in simulation based solely on the developers' contribution behaviour. Subsequent models add further agents, such as different developer types and bugs, as well as dependencies between the agents. These extended models can then be used to answer various questions about software evolution in simulation; one such question is what happens to the quality of the software when the core developer suddenly leaves the project. The most complex model can simulate software refactorings based on graph transformations. The simulation output is a graph that represents the software, namely the change coupling graph, extended for the simulation of refactorings; in this thesis, this graph is referred to as the software graph. To parameterise these models, we developed several mining tools. These tools allow us to instantiate a model with project-specific parameters, to instantiate a model with a snapshot of the analysed project, or to parameterise the transformation rules required to model refactorings. The results of three case studies show, among other things, that agent-based simulation is an appropriate choice for predicting the evolution of software projects. Furthermore, we were able to show that, with a suitable selection of simulation parameters, different growth trends of the real software can be reproduced in simulation. The best results for the simulated software graph are obtained when we start the simulation after an initial phase with a snapshot of the real software. Regarding refactorings, we were able to show that the model based on graph transformations is applicable and that it slightly improves the simulated growth.
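    The simplest model described above can be sketched as follows: developer agents with no mutual dependencies, each committing changes at its own rate, jointly produce the project's growth curve. The parameters below are illustrative, not mined from a real project as in the thesis.

```python
# Minimal sketch of an agent-based software-growth model: independent
# developer agents whose accumulated change activity drives project growth.
import numpy as np

rng = np.random.default_rng(42)

class Developer:
    """An agent whose only behaviour is committing file changes."""
    def __init__(self, commit_rate: float, change_size: float):
        self.commit_rate = commit_rate    # mean commits per time step
        self.change_size = change_size    # mean files touched per commit

    def work(self) -> int:
        # File changes this agent produces in one time step.
        commits = rng.poisson(self.commit_rate)
        return int(sum(rng.poisson(self.change_size) for _ in range(commits)))

def simulate(devs, steps: int = 100):
    size, growth = 0, []
    for _ in range(steps):
        size += sum(d.work() for d in devs)   # cumulative change activity
        growth.append(size)
    return growth

# One core developer and two casual contributors.
growth = simulate([Developer(3.0, 2.5), Developer(0.5, 1.5), Developer(0.5, 1.5)])
print("final size (accumulated file changes):", growth[-1])
```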

    Data modeling for web-based mobile tracking system of internally displaced person during conflict

    Over the last two decades, displaced families have become a major issue in many countries due to the increase in natural disasters, armed conflicts, and terrorist attacks. This presents great challenges to governments as well as to the agencies that manage them. Many agencies report difficulty in providing relief to these families because the families cannot be tracked after they register in shelters or camps, whether due to the random movement of the families or because a camp is exposed to natural disasters or armed attacks. This study proposes a requirements model for tracking internally displaced persons (IDPs), based on online interviews with experts from the International Organization for Migration and with government officials who worked in direct contact with the displaced families. The requirements were used to develop a web-based mobile application to track, locate, document, and verify IDPs. An evaluation was conducted to measure the usability of the mobile application; the result suggests that the application is relevant and suitable for tracking IDPs. The main contribution of this study is the set of requirements for a mobile application designed specifically to track IDPs.
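    As an illustration of the kind of data model such a tracking application implies, the sketch below defines minimal records for a displaced person and their location history. All field names are assumptions made for illustration; the study's actual requirements model is derived from the expert interviews.

```python
# Illustrative (hypothetical) data model for tracking an internally
# displaced person after camp registration.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class LocationFix:
    latitude: float
    longitude: float
    recorded_at: datetime
    source: str                       # e.g. "mobile-gps" or "camp-registration"

@dataclass
class DisplacedPerson:
    person_id: str                    # identifier issued at registration
    family_id: str                    # groups members of one displaced family
    registered_camp: str
    verified: bool = False            # identity verified by an agency officer
    movements: List[LocationFix] = field(default_factory=list)

    def track(self, fix: LocationFix) -> None:
        """Append a location fix so the person remains traceable after registration."""
        self.movements.append(fix)
```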

    Towards a philosophical understanding of agile software methodologies : the case of Kuhn versus Popper

    This dissertation is original in using the contrasting ideas of two leading 20th-century philosophers of science, Karl Popper and Thomas Kuhn, to provide a philosophical understanding, firstly, of the shift from traditional software methodologies to the so-called Agile methodologies, and, secondly, of the values, principles and practices underlying the most prominent of the Agile methodologies, Extreme Programming (XP). The dissertation takes a revisionist approach, following Fuller—the founder of social epistemology—in reading Popper against Kuhn's epistemological hegemony. The investigations relate to two main branches of philosophy: epistemology and ethics. The epistemological part of the dissertation compares Kuhn's and Popper's alternative accounts of the development of scientific knowledge with the Agile methodologists' ideas of the development of software, in order to assess the extent to which Agile software development resembles a scientific discipline. The investigations relating to ethics transfer concepts from social engineering—in particular, Popper's distinction between piecemeal and utopian social engineering—to software engineering, in order to assess both the democratic and authoritarian aspects of Agile software development and management. The use of Kuhn's ideas of scientific revolutions and paradigm shift by several leading figures of the Agile software methodologies—most notably Kent Beck, the leader of Extreme Programming (XP)—to predict a fundamental shift from traditional to Agile software methodologies is critically assessed. A systematic investigation into whether Kuhn's theory as a whole can provide an adequate account of the day-to-day practice of Agile software development is also provided. As an alternative to the use of Kuhn's ideas, the critical rationalist philosophy of Karl Popper is investigated. On the one hand, the dissertation assesses whether the epistemological aspects of Popper's philosophy—especially his notions of falsificationism, evolutionary epistemology, and three-worlds metaphysics—provide a suitable framework for understanding the philosophical basis of everyday Agile software development. On the other hand, the aspects of Popper's philosophy relating to ethics, which provide an ideal for scientific practice in an open society, are investigated in order to determine whether they coincide with the avowedly democratic values of Agile software methodologies. The investigations led to the following conclusions. Firstly, Kuhn's ideas are useful in predicting the effects of the full-scale adoption of Agile methodologies, and they describe the way in which several leaders of the Agile methodologies promote their methodologies; they do not, however, account for the detailed methodological practice of Agile software development. Secondly, several aspects of Popper's philosophy were found to be aligned with aspects of Agile software development. In relation to epistemology, Popper's principle of falsificationism provides a criterion for understanding the rational and scientific basis of several Agile principles and practices, his evolutionary epistemology resembles the iterative-incremental design approach of Agile methodologies, and his three-worlds metaphysical model provides an understanding of both the nature of software and the Agile methodologists' approach to creating and sharing knowledge. In relation to ethics, Popper's notion of an open society provides an understanding of the rational and ethical basis of the values underlying Agile software development and management, as well as of the piecemeal adoption of Agile software methodologies. Dissertation (MSc), University of Pretoria, 2009. Computer Science.

    Interactive Machine Learning for User-Innovation Toolkits – An Action Design Research approach

    Machine learning offers great potential to developers and end users in the creative industries. However, to better support creative software developers' needs and empower them as machine learning users and innovators, the usability of and developer experience with machine learning tools must be considered and better understood. This thesis asks the following research questions: How can we apply a user-centred approach to the design of developer tools for rapid prototyping with Interactive Machine Learning? In what ways can we design better developer tools to accelerate and broaden innovation with machine learning? The thesis presents a three-year longitudinal action research study that I undertook within a multi-institutional consortium leading the EU H2020-funded Innovation Action RAPID-MIX. The scope of the research presented here was the application of a user-centred approach to the design and evaluation of developer tools for rapid prototyping and product development with machine learning. The thesis presents my work in collaboration with other members of RAPID-MIX, including the design and deployment of a user-centred methodology for the project, interventions for gathering requirements with RAPID-MIX consortium stakeholders and end users, and the prototyping, development, and evaluation of a software development toolkit for interactive machine learning. The thesis contributes new understanding of the consequences and implications of a user-centred approach to the design and evaluation of developer tools for rapid prototyping of interactive machine learning systems. This includes 1) new understanding of the goals, needs, expectations, and challenges facing creative machine-learning non-expert developers, and 2) an evaluation of the usability and design trade-offs of a toolkit for rapid prototyping with interactive machine learning. The thesis also contributes 3) a methods framework of User-Centred Design Actions for harmonising User-Centred Design with Action Research and for supporting the collaboration between action researchers and practitioners working in rapid innovation actions, and 4) recommendations for applying Action Research and User-Centred Design in similar contexts and at similar scale.
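    The interactive machine learning workflow such a toolkit supports (record examples, retrain instantly, run the model live) can be sketched as follows. This illustrates the workflow only and is not the RAPID-MIX toolkit's API; the mapping from gesture features to a synthesis parameter is an invented example.

```python
# Minimal sketch of the interactive machine learning (IML) loop:
# demonstrate examples, retrain in moments, run the model live.
from sklearn.neighbors import KNeighborsRegressor

class InteractiveRegression:
    def __init__(self, k: int = 1):
        self.examples, self.targets = [], []
        self.model = KNeighborsRegressor(n_neighbors=k)

    def record(self, inputs, output):
        # Called while the user demonstrates a mapping, e.g. gesture -> sound.
        self.examples.append(inputs)
        self.targets.append(output)

    def train(self):
        # Training is fast enough to re-run after every few examples.
        self.model.fit(self.examples, self.targets)

    def run(self, inputs):
        return self.model.predict([inputs])[0]

# Two demonstrated examples: gesture position -> synth frequency (invented).
iml = InteractiveRegression()
iml.record([0.1, 0.2], 440.0)
iml.record([0.8, 0.9], 880.0)
iml.train()
print(iml.run([0.2, 0.3]))
```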

    Mining app reviews to support software engineering

    The thesis studies how mining app reviews can support software engineering. App reviews—short user reviews of an app in app stores—provide a potentially rich source of information to help software development teams maintain and evolve their products. Exploiting this information is, however, difficult due to the large number of reviews and the difficulty of extracting useful, actionable information from short informal texts. A variety of app review mining techniques have been proposed to classify reviews and to extract information such as feature requests, bug descriptions, and user sentiments, but the usefulness of these techniques in practice is still unknown. Research in this area has grown rapidly, resulting in a large number of scientific publications (at least 182 between 2010 and 2020), but so far almost no independent evaluations have been performed, and hardly any descriptions exist of how diverse techniques fit together to support specific software engineering tasks. The thesis presents a series of contributions to address these limitations. We first report the findings of a systematic literature review of app review mining, exposing the breadth and limitations of research in this area. Using findings from the literature review, we then present a reference model that relates features of app review mining tools to specific software engineering tasks supporting requirements engineering, software maintenance, and evolution. We then present two additional contributions extending previous evaluations of app review mining techniques. We present a novel independent evaluation of opinion mining techniques using an annotated dataset created for our experiment; our evaluation finds lower effectiveness than initially reported by the techniques' authors. A final part of the thesis evaluates approaches to searching for app reviews pertinent to a particular feature. The findings show that a general-purpose search technique is more effective than state-of-the-art purpose-built app review mining techniques, and suggest its usefulness for requirements elicitation. Overall, the thesis contributes to improving the empirical evaluation of app review mining techniques and their application in software engineering practice. Researchers and developers of future app mining tools will benefit from the novel reference model, detailed experiment designs, and publicly available datasets presented in the thesis.
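    The feature-pertinent review search evaluated in the final part can be illustrated with a general-purpose retrieval technique, TF-IDF with cosine similarity, of the kind the thesis found competitive with purpose-built miners. The reviews and query below are invented for the example.

```python
# Minimal sketch of feature-pertinent review search using generic retrieval:
# rank reviews by TF-IDF cosine similarity to a feature query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "love the app but dark mode burns my eyes at night",
    "crashes every time I upload a photo",
    "please add an option to export my playlists",
    "the new dark theme is gorgeous, thanks!",
]

query = "dark mode"
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(reviews + [query])   # last row is the query
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Reviews most pertinent to the queried feature come first.
for score, review in sorted(zip(scores, reviews), reverse=True):
    print(f"{score:.2f}  {review}")
```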