38 research outputs found

    Multi-engine, multi-level simulation for validating system specifications and optimizing power consumption

    This work aims at system-level modelling and simulation, in SystemC-AMS, of a transceiver for the Bluetooth Low Energy (BLE) standard. The goal is to analyze the relationship between transceiver performance, in terms of Bit-Error-Rate (BER), and its energy consumption, which requires a transceiver model that combines system-level simulation speed with the power consumption and RF specifications of the low-level design blocks. Simulation time for such a system under realistic use cases is a key factor for the development of such a platform, and, to obtain results that are as accurate as possible, the high-level models must be refined from lower-level models or from measurements. The Meet-in-the-Middle approach, combined with the Baseband Equivalent (BBE) modelling method, was chosen to satisfy these two requirements, namely low simulation time and accurate results. A global simulation of a complete BLE system is achieved by integrating the transceiver model into a BLE system model described in SystemC-TLM, which covers the layers above the PHY. The simulation is based on a communication system of two BLE devices and is run with different BLE use cases. First, each block of a Bluetooth transceiver was modelled and validated. Faced with prohibitive simulation times, the RF blocks were then rewritten using the BBE methodology and refined to take into account the non-linearities that affect both consumption and BER. Each circuit (each model) was verified separately, and a first BLE system simulation (point-to-point between a transmitter and a receiver) was executed, from which the BER and the energy estimate were obtained. This platform fulfills our expectations: the simulation time is suitable and the results have been validated against circuit measurements provided by the Riviera Waves company. Finally, two versions of the same transceiver architecture were modelled, simulated and compared.
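
    To make the baseband-equivalent idea concrete: instead of simulating the 2.4 GHz carrier sample by sample, the RF signal x(t) = Re{x~(t) e^(j 2 pi f_c t)} is replaced by its complex envelope x~(t), which only needs to be sampled at the signal bandwidth. The following minimal Python sketch of a BLE-like FSK link illustrates this; it is not the thesis's SystemC-AMS model, and the symbol rate, deviation, oversampling factor and SNR are assumed for illustration.

        # Minimal baseband-equivalent (BBE) link sketch -- illustrative only, not
        # the thesis's SystemC-AMS model. All parameters are assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        FS = 8e6           # baseband sample rate [Hz], >> deviation, << carrier
        RB = 1e6           # BLE symbol rate, 1 Msym/s
        DEV = 250e3        # FSK frequency deviation [Hz]
        SPS = int(FS / RB) # samples per symbol

        def tx_baseband(bits):
            """Map bits to a unit-amplitude complex envelope (2-FSK, unfiltered)."""
            f = np.repeat(np.where(bits, DEV, -DEV), SPS)  # instantaneous frequency
            phase = 2 * np.pi * np.cumsum(f) / FS          # integrate to phase
            return np.exp(1j * phase)

        def rx_detect(x):
            """Discriminator detector: sign of the per-symbol phase increment."""
            dphi = np.angle(x[1:] * np.conj(x[:-1]))
            per_sym = dphi[: (len(dphi) // SPS) * SPS].reshape(-1, SPS).sum(axis=1)
            return per_sym > 0

        bits = rng.integers(0, 2, 20000).astype(bool)
        x = tx_baseband(bits)
        snr_db = 12.0                                      # assumed per-sample SNR
        sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
        noise = sigma * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
        est = rx_detect(x + noise)
        print("BER ~", np.mean(est != bits[:len(est)]))

    With 8 samples per symbol, the envelope is simulated at 8 MHz rather than at several times the 2.4 GHz carrier frequency, which is the source of the speedup the abstract refers to.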

    Methods and Description Languages for the Modelling and Verification of Circuits and Systems: MBMV 2015 - Proceedings, Chemnitz, 3-4 March 2015

    The workshop Methods and Description Languages for the Modelling and Verification of Circuits and Systems (MBMV 2015) is now being held for the 18th time. This year's hosts are the Chair of Circuit and System Design at Technische Universität Chemnitz and the Steinbeis Research Center for System Design and Test. The workshop aims to discuss the latest trends, results and current problems in the field of methods for modelling and verification as well as description languages for digital, analog and mixed-signal circuits. It is thus intended as a forum for the exchange of ideas. The workshop also offers a platform for exchange between research and industry as well as for maintaining existing contacts and establishing new ones. It allows young scientists to present their ideas and approaches to a broad audience from academia and industry and to discuss them in depth during the event. Its long history has made it a fixture in many event calendars. Traditionally, the meetings of the ITG expert groups are also affiliated with the workshop. This year, two projects funded by the Federal Ministry of Education and Research within the InnoProfile-Transfer initiative are using the workshop to present their research results to a broad audience in two dedicated tracks. Representatives of the projects Generische Plattform für Systemzuverlässigkeit und Verifikation (GPZV) and GINKO - Generische Infrastruktur zur nahtlosen energetischen Kopplung von Elektrofahrzeugen present parts of their current work. This enriches the workshop with additional focus topics and provides a valuable complement to the authors' contributions. [... from the preface

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be a periodic annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multi-billion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.

    Virtual Prototyping Methodology for Power Automation Cyber-Physical-Systems

    In this thesis, the author proposes a circular system development model which considers all the stages in a typical development process for industrial systems. In particular, the present work shows that the use of virtual prototyping at early stages of system development can reduce the overall design and verification effort by allowing exploration of the complete system architecture and by uncovering integration issues early on. The modeling techniques of this research are based on VHDL-AMS, while also supporting other modeling languages such as C/C++, SPICE, and Verilog-AMS, together with integrated simulation tools. In contrast with conventional approaches, it is shown that the proposed methodology is suited to small-scale Cyber-Physical System (CPS) design and verification thanks to the modularity and scalability of the modeling approach. The proposed modeling techniques seamlessly enable CPS design together with the implementation of their subsystems. In particular, the contribution of this work improves the virtual prototyping approach that has been successfully used during the development of smart electrical sensors and monitoring equipment for high- and medium-voltage applications. The design of the measurement and self-calibration circuits of a medium-voltage current sensor based on the Rogowski coil transducer is presented as an example. The proposed small-scale CPS design methodology based on virtual prototyping, namely the VP-based design methodology, uses important theoretical concepts from layered design, component-based design, and platform-based design. These foundations are the basis for a modeling methodology that provides a vehicle for improving system verification towards correct-by-design systems. The main contributions of this research are: the re-definition of the system development lifecycle by using a virtual prototyping methodology; the design and implementation of a model library that maximizes the reuse of computational models and their related IP; and a set of VHDL-AMS modeling guidelines established with the purpose of improving the modularity and scalability of virtual prototypes. These elements are key to supporting the introduction of virtual prototyping into industrial companies that can thoroughly profit from this approach, but cannot commit a dedicated team to the creation, support, and maintenance of computational models and their infrastructure. Thanks to the progressive nature of the proposed methodology, virtual prototypes can indeed be introduced with relatively low initial effort and enhanced over time. The presented methodology and its infrastructure may grow into a bidirectional communication medium between non-expert system designers (i.e. system architects and virtual integrators) and domain specialists such as mechanical designers, power electrical designers, embedded-electronics designers, and software designers. The proposed design methodology advocates reducing CPS design complexity through a meet-in-the-middle approach to system-level modeling. In this direction, the modeling techniques introduced in this work facilitate architectural design space exploration, critical cross-domain variable analysis (especially important at component interfaces), and system-level optimization and verification.
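
    The Rogowski coil example can be made concrete with a small behavioural sketch: the coil outputs a voltage proportional to the derivative of the measured current, v(t) = -M di/dt, so the measurement circuit must integrate the coil voltage to recover the current (and the self-calibration must compensate the gain and offset errors of this chain). The Python sketch below is illustrative only, not the thesis's VHDL-AMS model; the mutual inductance and the 50 Hz test current are assumed values.

        # Behavioral sketch of a Rogowski coil measurement chain -- not the
        # thesis's VHDL-AMS model. M and the test current are assumed.
        import numpy as np

        M = 1e-6                      # coil mutual inductance [H] (assumed)
        FS = 100e3                    # sample rate [Hz]
        t = np.arange(0, 0.1, 1 / FS)

        i_true = 100 * np.sin(2 * np.pi * 50 * t)   # primary current [A]
        v_coil = -M * np.gradient(i_true, 1 / FS)   # coil output: v = -M di/dt

        # Ideal integrator stage reconstructs the current from the coil voltage.
        i_est = -np.cumsum(v_coil) / (FS * M)
        i_est -= i_est.mean()                       # remove integration offset
        print("max reconstruction error [A]:", np.max(np.abs(i_est - i_true)))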

    Test analysis & fault simulation of microfluidic systems

    This work presents a design, simulation and test methodology for microfluidic systems, with particular focus on simulation for test. A Microfluidic Fault Simulator (MFS) has been created, based around COMSOL, which allows a fault-free system model to undergo fault injection and provide test measurements. A post-MFS test analysis procedure is also described. A range of fault-free system simulations have been cross-validated against experimental work to gauge the accuracy of the fundamental simulation approach prior to further investigation and development of the simulation and test procedure. A generic mechanism, termed a fault block, has been developed to provide fault injection and a method of describing a low-abstraction behavioural fault model within the system. This technique has allowed the creation of a fault library containing a range of different microfluidic fault conditions. Each of the fault models has been cross-validated against experimental conditions or published results to determine its accuracy. Two test methods, namely impedance spectroscopy and Levich electro-chemical sensors, have been investigated as general methods of microfluidic test, each of which has been shown to be sensitive to a multitude of faults. Each method has been successfully implemented within the simulation environment and cross-validated by first-hand experimentation or published work. A test analysis procedure based around the Neyman-Pearson criterion has been developed to provide a probabilistic metric for each test applied to a given fault condition, giving a quantitative assessment of each test. These metrics are used to analyse the sensitivity of each test method, which is useful when determining which tests to employ in the final system. Furthermore, these probabilistic metrics may be combined to provide a fault coverage metric for the complete system. The complete MFS method has been applied to two system case studies: a hydrodynamic “Y” channel and a flow cytometry system for prognosing head and neck cancer. Decision trees are trained on the test measurement data and fault conditions as a means of classifying the system's fault condition state. The classification rules created by the decision trees may be displayed graphically or as a set of rules which can be loaded into test instrumentation. During the course of this research, a high-voltage power supply instrument has been developed to aid electro-osmotic experimentation, along with an impedance spectrometer to provide embedded test.
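
    The Neyman-Pearson test metric mentioned above can be sketched as follows: fix an acceptable false-alarm probability on the fault-free measurement distribution, derive the corresponding decision threshold, and report the probability that a faulty system exceeds it. A minimal Python sketch, assuming Gaussian fault-free and faulty distributions with hypothetical parameters (not taken from the thesis):

        # Neyman-Pearson style test metric -- a minimal sketch assuming Gaussian
        # fault-free and faulty measurement distributions (numbers hypothetical).
        from scipy.stats import norm

        def detection_probability(mu0, sd0, mu1, sd1, p_fa=0.01):
            """P(detect) at false-alarm rate p_fa for a one-sided threshold test."""
            thr = norm.ppf(1 - p_fa, loc=mu0, scale=sd0)  # threshold on fault-free dist.
            return norm.sf(thr, loc=mu1, scale=sd1)       # faulty mass above threshold

        # Example: impedance magnitude shifts from 1.00 to 1.20 (arbitrary units)
        # under a hypothetical blockage fault.
        pd = detection_probability(mu0=1.00, sd0=0.05, mu1=1.20, sd1=0.07)
        print(f"P_D at 1% false alarms: {pd:.3f}")

    Per-fault detection probabilities of this kind can then be combined, for example weighted by fault likelihood, into the system-level fault coverage metric the abstract mentions.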

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking for different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples of the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
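
    The state-based intuition can be illustrated without TLA+: for serializability, there must be some commit order in which every transaction's reads match the database state produced by the transactions before it. The brute-force Python sketch below is a simplification of that idea, not the paper's TLA+ specification, and only handles toy executions:

        # Minimal sketch of a state-based serializability check in the spirit of
        # Crooks et al.'s client-centric model -- a simplification, not the
        # paper's TLA+ spec. Transactions are (read_set, write_set) dict pairs.
        from itertools import permutations

        def serializable(txns, init):
            """True iff some commit order lets every txn read its prefix's state."""
            for order in permutations(txns):
                state = dict(init)
                ok = True
                for reads, writes in order:
                    if any(state.get(k) != v for k, v in reads.items()):
                        ok = False          # reads don't match the prefix state
                        break
                    state.update(writes)    # commit: apply the write set
                if ok:
                    return True
            return False

        init = {"x": 0, "y": 0}
        t1 = ({"x": 0}, {"y": 1})           # reads x=0, writes y=1
        t2 = ({"y": 0}, {"x": 1})           # reads y=0, writes x=1
        print(serializable([t1, t2], init)) # False

    The example encodes a write-skew-style anomaly: both transactions read the initial state while writing the key the other reads, so no commit order satisfies both read sets.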

    Applying patterns in embedded systems design for managing quality attributes and their trade-offs

    Embedded systems comprise one of the most important types of software-intensive systems, as they are pervasive and used in daily life more than any other type, e.g., in cars or in electrical appliances. When these systems operate under hard constraints whose violation can lead to catastrophic events, the system is classified as a critical embedded system (CES). The quality attributes related to these hard constraints are named critical quality attributes (CQAs). For example, the performance of the software for cruise control or self-driving in a car is critical, as it can potentially relate to harming human lives. Despite the growing body of knowledge on engineering CESs, there is still a lack of approaches that can support their design while managing CQAs and their trade-offs with non-critical ones (e.g., maintainability and reusability). To address this gap, the state of research and practice on designing CESs and managing quality trade-offs was explored, approaches to improve their design were identified, and the merit of these approaches was empirically investigated. When designing software, one common approach is to organize its components according to well-known structures, named design patterns. However, these patterns may be avoided in some classes of systems such as CESs, as they are sometimes associated with the detriment of CQAs. In short, the findings reported in the thesis suggest that, when applicable, design patterns can promote CQAs while supporting the management of trade-offs. The thesis also reports on a phenomenon, namely pattern grime, and on factors that can influence the extent of the observed benefits.

    Signal processing algorithms for brain computer interfacing

    A brain computer interface (BCI) allows the user to communicate with a computer using only brain signals. In this way, the conventional neural pathways of peripheral nerves and muscles are bypassed, thereby enabling control of a computer by a person with no motor control. The brain signals, known as electroencephalograms (EEGs), are recorded by electrodes placed on the surface of the scalp. A requirement for a successful BCI is that interfering artifacts are removed from the EEGs, so that the important cognitive information is revealed. Two systems based on second order blind source separation (BSS) are therefore proposed. The first system is based on developing a gradient-based BSS algorithm within which a constraint is incorporated such that the effect of eye blinking artifacts is mitigated in the constituent independent components (ICs). The second method is based on reconstructing the EEGs such that the effect of eye blinking artifacts is removed. The EEGs are separated using an unconstrained BSS algorithm based on the principles of second order blind identification. Certain characteristics describing eye blinking artifacts are used to identify the related ICs, and the remaining ICs are then used to reconstruct artifact-free EEGs. Both methods yield significantly better results than standard techniques. The degree to which the artifacts are removed is shown and compared with standard methods, both subjectively and objectively. The proposed BCI systems are based on extracting the sources related to finger movement and tracking the movement of the corresponding signal sources. The first proposed system explicitly localises the sources over successive temporal windows of ICs using the least squares (LS) method and characterises the trajectories of the sources. A constrained BSS algorithm is then developed to separate the EEGs while mitigating the eye blinking artifacts. Another approach is based on inferring causal relationships between electrode signals. Directed transfer functions (DTFs) are applied to short temporal windows of EEGs, from which a time-frequency map of causality is constructed. Additionally, the distribution of beta band power for the IC related to finger movement is combined with the DTF approach to form part of a robust classification system. Finally, a new modality for BCI is introduced based on space-time-frequency masking, in which the sources are assumed to be disjoint in space, time and frequency. The method is based on multi-way analysis of the EEGs and extraction of components related to finger movements. The components are localised in space-time-frequency and compared with the original EEGs in order to quantify the motion of the extracted component.
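
    The reconstruction idea of the second method can be sketched with AMUSE, a classic second-order BSS algorithm (whitening followed by eigendecomposition of a time-lagged covariance matrix). The Python sketch below stands in for the thesis's algorithms; identifying the blink IC by its kurtosis (blinks are sparse and spiky) is a simplifying assumption, and the signals are synthetic.

        # Sketch of artifact removal by second-order BSS and reconstruction.
        # AMUSE stands in for the thesis's method; the kurtosis heuristic for
        # finding the blink IC is a simplifying assumption.
        import numpy as np

        def amuse(X, lag=1):
            """Second-order BSS: whiten X (channels x samples), then
            eigendecompose the symmetrised time-lagged covariance."""
            X = X - X.mean(axis=1, keepdims=True)
            d, E = np.linalg.eigh(np.cov(X))
            W_white = E @ np.diag(d ** -0.5) @ E.T        # whitening matrix
            Z = W_white @ X
            C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
            _, V = np.linalg.eigh((C + C.T) / 2)
            W = V.T @ W_white                             # unmixing matrix
            return W @ X, W

        rng = np.random.default_rng(1)
        n = 2000
        s_brain = np.sin(2 * np.pi * 10 * np.arange(n) / 256)  # 10 Hz rhythm
        s_blink = (rng.random(n) < 0.01) * 5.0                 # sparse blink spikes
        A = rng.standard_normal((4, 2))                        # 4-channel mixing
        X = A @ np.vstack([s_brain, s_blink]) + 0.05 * rng.standard_normal((4, n))

        ICs, W = amuse(X)
        kurt = ((ICs - ICs.mean(1, keepdims=True)) ** 4).mean(1) / ICs.var(1) ** 2
        artifact = int(np.argmax(kurt))     # spikiest IC taken as the blink
        ICs[artifact] = 0                   # drop the artifact IC
        X_clean = np.linalg.pinv(W) @ ICs   # back-project to channel space
        print("identified artifact IC:", artifact)

    After zeroing the identified component, back-projection with the pseudo-inverse of the unmixing matrix yields artifact-free channel signals, mirroring the reconstruction step described above.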