24 research outputs found

    Global Growth and Trends of In-Body Communication Research—Insight From Bibliometric Analysis

    A bibliometric analysis was conducted to examine research on in-body communication. This study aimed to assess research growth in different countries, identify influential authors for potential international collaboration, investigate research challenges, and explore future prospects for in-body communication. A total of 148 English-language articles from journals and conference proceedings were gathered from the Scopus database, covering the period from 2006 to August 2023. VOSviewer 1.6.19 and Tableau Cloud were used to analyze the data. The analysis reveals that research on in-body communication has fluctuated but shows an overall upward trend. The United States, Finland, and Japan were identified as the top three countries in terms of publication quantity, while researchers from Norway, Finland, and Morocco received the highest numbers of citations. The University of Oulu in Finland has emerged as a productive institution in this field. Collaborative research opportunities exist with the countries mentioned above or with authors who have expertise in this topic. The dominant research topic within this field is ultra-wideband (UWB) technology. One of the future challenges in this field is the exploration of optical wireless communication (OWC) as a potential communication medium for in-body devices, such as electronic devices implanted in the human body; this includes improving performance to meet the requirements of in-body communication devices. Additionally, this paper provides further insights into the progress of research on OWC for in-body communication conducted in our laboratory.

    Wiring Circuits Is Easy as {0,1,ω}, or Is It...

    Quantitative type systems support fine-grained reasoning about term usage in our programming languages. Hardware design languages are another style of language in which quantitative typing would be beneficial: when wiring components together, we must ensure that there are no unused ports, dangling wires, or accidental fan-ins and fan-outs. Although many wire-usage checks are detectable using static analysis tools such as Verilator, quantitative typing makes these extrinsic checks an intrinsic aspect of the type system. With quantitative typing of bound terms, we can provide design-time checks that all wires and ports have been used, and ensure that all wiring decisions are made explicitly, neither implicitly nor accidentally. We showcase the use of quantitative types in hardware design languages by detailing how we can retrofit quantitative types onto SystemVerilog netlists, and the impact that such a quantitative type system has when creating designs. Netlists are gate-level descriptions of hardware produced as the result of synthesis, and it is from these netlists that hardware is generated (fabless or fabbed). First, we present a simple structural type system for a featherweight version of SystemVerilog netlists that demonstrates how we can type netlists using standard structural techniques, and what it means for netlists to be type-safe yet still lead to ill-wired designs. We then detail how to retrofit the language with quantitative types, making the type system substructural, and show how our new type-safety result ensures that wires and ports are used exactly once. Our ideas have been proven both practically and formally by realising the work in Idris2, through which we construct a verified language implementation that can type-check existing designs. From this work we can look to promote quantitative typing back up the synthesis chain to more comprehensive hardware description languages, and to help develop new and better hardware description languages with quantitative typing.
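
    The wire-usage discipline described above can be pictured with a small extrinsic check. The Python sketch below is a hypothetical approximation of what the paper's quantitative type system enforces intrinsically at design time: every wire must be driven exactly once, and every extra read (fan-out) must be an explicit, deliberate decision. The netlist encoding and all names are illustrative and are not taken from the paper's Idris2 formalisation.

    from collections import Counter

    # Hypothetical toy netlist: each gate lists the wires it reads (inputs)
    # and the wires it drives (outputs).
    netlist = [
        {"gate": "and0", "reads": ["a", "b"], "drives": ["t"]},
        {"gate": "not0", "reads": ["t"],      "drives": ["y"]},
    ]
    module_inputs = ["a", "b"]    # driven from outside the module
    module_outputs = ["y"]        # read from outside the module

    drives = Counter(module_inputs)
    reads = Counter(module_outputs)
    for gate in netlist:
        reads.update(gate["reads"])
        drives.update(gate["drives"])

    # A well-wired design prints nothing; try deleting not0 to see the
    # "unused wire" and "dangling wire" reports.
    for wire in sorted(set(reads) | set(drives)):
        if drives[wire] == 0:
            print(f"dangling wire, never driven: {wire}")
        elif drives[wire] > 1:
            print(f"accidental fan-in, driven {drives[wire]} times: {wire}")
        if reads[wire] == 0:
            print(f"unused wire, never read: {wire}")
        elif reads[wire] > 1:
            print(f"fan-out, read {reads[wire]} times; must be an explicit choice: {wire}")

    In the paper these checks become part of the type system itself, so an ill-wired design is rejected at type-checking time rather than by a separate lint pass.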

    Revisiting the high-performance reconfigurable computing for future datacenters

    Modern datacenters are reinforcing their computational power and energy efficiency by integrating field-programmable gate arrays (FPGAs). The sustainability of this large-scale integration depends on enabling multi-tenant FPGAs. This requisite amplifies the importance of the communication architecture and the virtualization method, with the features required to meet this high-end objective. Consequently, over the last decade, academia and industry have proposed several virtualization techniques and hardware architectures addressing resource management, scheduling, adoptability, segregation, scalability, performance overhead, availability, programmability, time-to-market, security and, mainly, multi-tenancy. This paper provides an extensive survey covering three important aspects: a discussion of non-standard terms used in the existing literature, network-on-chip evaluation choices as a means to explore the communication architecture, and virtualization methods under the latest classification. The purpose is to emphasize the importance of choosing an appropriate communication architecture, virtualization technique, and standard language to evolve multi-tenant FPGAs in datacenters. None of the previous surveys has encapsulated these aspects in a single work. Open problems for the scientific community are also indicated.

    Addressing uncertainty in the safety assurance of machine-learning

    There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to their widespread deployment for such applications. In this paper we explore the factors that make the safety assurance of ML such a challenging task. In particular, we address the impact of uncertainty on the confidence in ML safety assurance arguments. We show how this uncertainty is related to the complexity of the ML models as well as the inherent complexity of the tasks they are designed to implement. Based on definitions of uncertainty and an exemplary assurance argument structure, we examine typical weaknesses in the argument and how these can be addressed. The analysis combines an understanding of the causes of insufficiencies in ML models with a systematic analysis of the types of asserted context, asserted evidence, and asserted inference within the assurance argument. This leads to a systematic identification of requirements on the assurance argument structure as well as on the supporting evidence. We conclude that a combination of qualitative arguments and quantitative evidence is required to build a robust argument for safety-related properties of ML functions, one that is continuously refined to reduce residual and emerging uncertainties after the function has been deployed into the target environment.
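
    As a purely illustrative aid, the following Python sketch encodes a tiny assurance-argument fragment and flags claims that rest only on asserted context, asserted evidence, or asserted inference. The structure and field names are hypothetical and are not the paper's notation; the sketch only shows how such a systematic scan for residual uncertainty could be mechanised.

    # Hypothetical assurance-argument fragment; names and fields are illustrative.
    argument = {
        "claim": "The ML-based perception function is acceptably safe",
        "context": [
            {"text": "Operational design domain: urban roads, daylight", "supported": False},
        ],
        "inference": {"text": "Test performance transfers to operation", "supported": False},
        "evidence": [
            {"text": "Test-set accuracy report", "supported": True},
        ],
    }

    def residual_uncertainties(arg):
        """Collect argument elements that are merely asserted, not supported."""
        gaps = [("asserted context", c["text"]) for c in arg["context"] if not c["supported"]]
        if not arg["inference"]["supported"]:
            gaps.append(("asserted inference", arg["inference"]["text"]))
        gaps += [("asserted evidence", e["text"]) for e in arg["evidence"] if not e["supported"]]
        return gaps

    for kind, text in residual_uncertainties(argument):
        print(f"{kind}: {text}")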

    Introduction to Runtime Verification

    The aim of this chapter is to act as a primer for those wanting to learn about Runtime Verification (RV). We start by providing an overview of the main specification languages used for RV. We then introduce the standard terminology necessary to describe the monitoring problem, covering the pragmatic issues of monitoring and instrumentation, and discussing the monitorability problem extensively.
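
    As a concrete, minimal illustration of the monitoring problem (not an example taken from the chapter itself), the Python sketch below monitors a finite trace against the safety property "a resource is never released without having been acquired" and reports a verdict; the inconclusive outcome reflects that a finite prefix can definitively violate, but not definitively satisfy, such a property.

    # Minimal runtime monitor sketch (illustrative only): checks the safety
    # property "never release a resource that is not currently held" over a
    # finite trace of events.
    def monitor(trace):
        held = False
        for position, event in enumerate(trace):
            if event == "acquire":
                held = True
            elif event == "release":
                if not held:
                    return ("violation", position)
                held = False
        # No violation so far; a longer trace could still violate the property.
        return ("inconclusive", len(trace))

    print(monitor(["acquire", "release", "acquire"]))  # ('inconclusive', 3)
    print(monitor(["release", "acquire"]))             # ('violation', 0)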

    A Survey of Challenges for Runtime Verification from Advanced Application Domains (Beyond Software)

    Runtime verification is an area of formal methods that studies the dynamic analysis of execution traces against formal specifications. Typically, the two main activities in runtime verification efforts are the creation of monitors from specifications and the evaluation of traces against the generated monitors. Other activities involve instrumenting the system to generate the trace and handling the communication between the system under analysis and the monitor. Most applications of runtime verification have focused on the dynamic analysis of software, even though there are many more potential applications to other computational devices and target systems. In this paper we present a collection of challenges for runtime verification extracted from concrete application domains, focusing on the difficulties that must be overcome to tackle them. The computational models that characterize these domains require new techniques beyond the current state of the art in runtime verification.

    Model counting for reactive systems

    Model counting is the problem of computing the number of solutions of a logical formula. In the last few years it has been studied primarily for propositional logic, where it has been shown to be useful in many applications. In planning, for example, propositional model counting has been used to compute the robustness of a plan in an incomplete domain. In information-flow control, model counting has been applied to measure the amount of information leaked by a security-critical system. In this thesis, we introduce the model counting problem for linear-time properties and show its applications in formal verification. In the same way that propositional model counting generalizes the satisfiability problem for propositional logic, counting models for linear-time properties generalizes the emptiness problem for languages over infinite words to one that asks for the number of words in a language. The model counting problem thus provides a foundation for quantitative extensions of model checking, where not only the existence of computations that violate the specification is determined, but also the number of such violations. We solve the model counting problem for the prominent class of omega-regular properties. We present algorithms for solving the problem for different classes of properties and show the advantages of our algorithms in comparison to indirect approaches based on encodings into propositional logic. We further show how model counting can be used to solve a variety of quantitative problems in formal verification, including probabilistic model checking, quantitative information flow in security-critical systems, and the synthesis of approximate implementations for reactive systems.
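
    To make the counting idea concrete, the following Python sketch counts the words of a fixed finite length accepted by a small deterministic safety automaton via dynamic programming over its states. This is only a bounded, finite-word stand-in for the thesis's problem, which counts models of omega-regular properties over infinite words; the automaton and property are invented for illustration.

    # Property over the alphabet {a, b}: "b never occurs twice in a row".
    delta = {
        ("ok", "a"): "ok",      ("ok", "b"): "seen_b",
        ("seen_b", "a"): "ok",  ("seen_b", "b"): "bad",
        ("bad", "a"): "bad",    ("bad", "b"): "bad",
    }
    accepting = {"ok", "seen_b"}  # safety: the "bad" sink is never entered

    def count_models(length):
        # counts[q] = number of words of the current length that reach state q
        counts = {"ok": 1, "seen_b": 0, "bad": 0}
        for _ in range(length):
            nxt = {"ok": 0, "seen_b": 0, "bad": 0}
            for state, n in counts.items():
                for letter in ("a", "b"):
                    nxt[delta[(state, letter)]] += n
            counts = nxt
        return sum(counts[q] for q in accepting)

    print([count_models(k) for k in range(1, 6)])  # [2, 3, 5, 8, 13]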

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4–5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.

    Synthesizing stream control

    For the management of reactive systems, controllers must coordinate time, data streams, and data transformations, all joined under the high-level perspective of their control flow. This control flow is required to drive the system correctly and continuously, which turns development into a challenge: the process is error-prone, time-consuming, unintuitive, and costly. An attractive alternative is to synthesize the system instead, where the developer only specifies the desired behavior and the synthesis engine automatically takes care of all the technical details. However, while current algorithms for the synthesis of reactive systems are well suited to handle control, they fail on complex data transformations due to the comparably large data space. Thus, to overcome the challenge of explicitly handling the data, we must separate data and control. We introduce Temporal Stream Logic (TSL), a logic that reasons exclusively about the control of the controller, while treating data and functional transformations as interchangeable black boxes. In TSL it is possible to specify control flow properties independently of the complexity of the handled data. Furthermore, with TSL at hand, a synthesis engine can check for realizability even without a concrete implementation of the data transformations. We present a modular development framework that first uses synthesis to identify the high-level control flow of a program. If successful, the created control flow is then extended with concrete data transformations and compiled into a final executable. Our results also show that current synthesis approaches cannot immediately replace existing manual development workflows. During the development of a reactive system, the developer may still start from incomplete or faulty specifications that need to be refined after subsequent inspection. In the worst case, constraints are contradictory or miss important assumptions, which leads to unrealizable specifications. In both scenarios, the developer needs additional feedback from the synthesis engine to debug errors and ultimately improve the system specification. To this end, we explore two further improvements. On the one hand, we consider output-sensitive synthesis metrics, which allow synthesizing simple and well-structured solutions that help the developer to understand and verify the underlying behavior quickly. On the other hand, we consider an extension with delay, the need for which is a frequent reason for unrealizability. With both methods at hand, we resolve the aforementioned problems and thereby help the developer, during the development phase, to effectively create a safe and correct reactive system.
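
    The separation of control and data that TSL enables can be pictured with a small, purely illustrative Python sketch (it is not output of the actual synthesis toolchain): the control structure below fixes only when an update happens, while the predicate and the transformations remain interchangeable black boxes that are supplied later.

    def make_controller(too_high, cool_down, keep):
        """Fix the control flow only; the three arguments are black boxes."""
        def step(state):
            if too_high(state):          # control decision
                return cool_down(state)  # black-box data transformation
            return keep(state)
        return step

    # One hypothetical instantiation of the black boxes:
    step = make_controller(
        too_high=lambda s: s["temp"] > 30.0,
        cool_down=lambda s: {**s, "temp": s["temp"] - 1.0},
        keep=lambda s: s,
    )
    state = {"temp": 32.0}
    for _ in range(3):
        state = step(state)
    print(state)  # {'temp': 30.0}

    Swapping in different predicates or transformations leaves the control structure untouched, mirroring how a TSL specification is checked for realizability without committing to a concrete data implementation.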