56 research outputs found

    Bridging MoCs in SystemC specifications of heterogeneous systems

    In order to obtain an efficient specification and simulation of a heterogeneous system, the choice of an appropriate model of computation (MoC) for each system part is essential. The choice depends on the design domain (e.g., analogue or digital) and on the abstraction level suitable for specifying and analysing the aspects considered important in each system part. In practice, the MoC choice is made implicitly by selecting a suitable language and simulation tool for each system part. This approach requires connecting different languages and simulation tools when the specification and simulation of the system are considered as a whole. SystemC can support a more unified specification methodology and simulation environment for heterogeneous systems, since it is extensible by libraries that support additional MoCs. A major requirement for these libraries is to provide means to connect system parts that are specified using different MoCs. However, these connection means usually do not provide enough flexibility to select and tune the right conversion semantics in a mixed-level specification, simulation, and refinement process. This article presents converter channels, a flexible approach to MoC connection within a SystemC environment consisting of three extensions, namely SystemC-AMS, HetSC, and OSSS+R. This work is supported by the FP6-2005-IST-5 European project
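
The converter-channel idea can be pictured in plain SystemC: a channel that consumes tokens from an untimed FIFO interface and replays them on a timed signal, fixing the conversion semantics explicitly at the MoC boundary. The sketch below is illustrative only and does not use the actual SystemC-AMS/HetSC converter-channel API; the module name UntimedToTimed and the 10 ns pacing are assumptions.

```cpp
// Minimal sketch of a MoC "converter channel" in plain SystemC (not the
// SystemC-AMS/HetSC API): tokens from an untimed FIFO are replayed on a
// timed signal at a fixed rate, making the conversion semantics explicit.
#include <systemc.h>

SC_MODULE(UntimedToTimed) {          // hypothetical name, for illustration
    sc_fifo_in<int> in;              // untimed side: blocking FIFO read
    sc_out<int>     out;             // timed side: signal updated per cycle

    SC_CTOR(UntimedToTimed) { SC_THREAD(run); }

    void run() {
        while (true) {
            int token = in.read();   // blocks until the untimed side produces
            out.write(token);        // publish on the timed signal
            wait(10, SC_NS);         // assumed pacing of the timed domain
        }
    }
};

int sc_main(int, char*[]) {
    sc_fifo<int>   fifo(4);
    sc_signal<int> sig;
    UntimedToTimed conv("conv");
    conv.in(fifo);
    conv.out(sig);
    fifo.nb_write(42);               // stand-in for an untimed producer module
    sc_start(100, SC_NS);
    return 0;
}
```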

    Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts

    Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain, often providing unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches to providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks, as well as an extension toward the object-oriented modeling paradigm.
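
The "dynamic ticks" execution model mentioned for the timed extension can be illustrated without any of the actual SCCharts/KIELER machinery: instead of polling at a fixed period, each tick reports the earliest future deadline it cares about, and the driver sleeps exactly until then. The sketch below is a generic C++ rendering of that scheme under our own assumptions; the TimedModel structure and the 500 ms timeout are invented for illustration.

```cpp
// Generic sketch of the "dynamic ticks" execution scheme: each tick returns
// the earliest instant at which the model must be woken again, and the
// driver sleeps until exactly that instant instead of polling periodically.
// This is an illustration, not the SCCharts/KIELER implementation.
#include <chrono>
#include <iostream>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical timed model: a single timeout transition after 500 ms.
struct TimedModel {
    Clock::time_point armed = Clock::now();
    bool fired = false;

    // Run one tick at 'now'; return the next instant a tick is needed.
    Clock::time_point tick(Clock::time_point now) {
        auto deadline = armed + std::chrono::milliseconds(500);
        if (!fired && now >= deadline) {
            fired = true;
            std::cout << "timeout transition taken\n";
        }
        return deadline;                       // next required wakeup
    }
};

int main() {
    TimedModel m;
    while (!m.fired) {
        auto next = m.tick(Clock::now());
        if (!m.fired)
            std::this_thread::sleep_until(next);  // dynamic, not periodic
    }
}
```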

    Re-use of tests and arguments for assessing dependable mixed-criticality systems

    The safety assessment of mixed-criticality systems (MCS) is a challenging activity due to system heterogeneity, design constraints, and increasing complexity. The foundation for MCSs is the integrated-architecture paradigm, in which a compact hardware platform comprises multiple execution platforms and communication interfaces to implement concurrent functions with different safety requirements. Besides a computing platform providing adequate isolation and fault-tolerance mechanisms, the development of an MCS application must also comply with the guidelines defined by the safety standards. One way to lower the overall MCS certification cost is to adopt a platform-based design (PBD) development approach. PBD is a model-based development (MBD) approach in which separate models of logic, hardware, and deployment support the analysis of the resulting system properties and behaviour. The PBD development of MCSs benefits from a composition of modular safety properties (e.g., modular safety cases), which supports the derivation of mixed-criticality product lines. Validation and verification (V&V) activities demand a substantial effort during the development of programmable electronics for safety-critical applications. In the MCS dependability assessment, the purpose of V&V is to provide evidence supporting the safety claims. Model-based development of MCSs adds further V&V tasks, because additional analyses (e.g., simulations) need to be carried out during the design phase. During the MCS integration phase, hardware-in-the-loop (HiL) plant simulators typically support the V&V campaigns, where test automation and fault injection are the keys to test repeatability and to thorough exercise of the safety mechanisms. This dissertation proposes several V&V artefact re-use strategies for early system-level verification of a distributed MCS, artefacts that are later reused up to the final stages of the development process: test-code re-use to verify the fault-tolerance mechanisms on a functional model of the system, combined with non-intrusive software fault injection; model-to-X-in-the-loop (XiL) and code-to-XiL re-use to provide models of the plant and of the distributed embedded nodes suited to the HiL simulator; and, finally, an argumentation framework to support the automated composition and staged completion of modular safety cases for dependability assessment, in the context of the platform-based development of mixed-criticality systems relying on the DREAMS harmonized platform.
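
The combination of test-code re-use with non-intrusive software fault injection can be pictured as interposing on the functional model's I/O so that faults (e.g., a stuck-at sensor value) are injected without modifying the code under test. The following sketch is our own illustration, not the dissertation's tooling; the names stuck_at and the threshold-based detection are assumptions.

```cpp
// Illustrative sketch (not the dissertation's tooling): non-intrusive
// software fault injection by interposing on the functional model's
// sensor-read path, so the same test code runs with and without faults.
#include <functional>
#include <iostream>

using SensorRead = std::function<double()>;

// Wrap a sensor-read function with a stuck-at fault active in [t_on, t_off).
SensorRead stuck_at(SensorRead real, double stuck_value,
                    int t_on, int t_off, const int& now) {
    return [=, &now]() {
        return (now >= t_on && now < t_off) ? stuck_value : real();
    };
}

int main() {
    int t = 0;                            // simulated time, in ticks
    SensorRead real = [&] { return 20.0 + 0.1 * t; };  // healthy sensor model
    SensorRead faulty = stuck_at(real, 999.0, 3, 6, t);

    for (t = 0; t < 8; ++t) {
        double v = faulty();              // system under test reads here
        bool alarm = v > 100.0;           // fault-tolerance check under test
        std::cout << "t=" << t << " value=" << v
                  << (alarm ? "  -> fault detected\n" : "\n");
    }
}
```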

    Master of Science

    This document describes an improved method for the formal verification of complex analog/mixed-signal (AMS) circuits. Currently, in our LEMA tool, verification properties are encoded using labeled Petri nets (LPNs). These LPNs are generated manually, a tedious process that requires considerable familiarity with the tool. To eliminate this time-consuming process, our LEMA tool is extended with a translator that converts properties written in a property specification language into LPNs. New methods are also implemented to separate the transient period from the stable output period, thus improving the generated model. Furthermore, the current methodology generates circuit models only for the input values used during simulation of the circuit, so the models generated for other control input values are not accurate. In this case, the accuracy of the generated models is improved by using a linear abstraction method such as interpolation.
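
Separating the transient period from the stable output period amounts to finding the first time after which the signal stays inside a tolerance band around its settled value; samples before that point are excluded from model generation. The sketch below is a generic settling-time detector of our own devising, not LEMA code; the tolerance and the synthetic step response are assumptions.

```cpp
// Generic settling-time detector (our illustration, not LEMA code): find
// the first sample index after which the signal stays within +/-tol of its
// final value, separating the transient from the stable output period.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

std::size_t settle_index(const std::vector<double>& y, double tol) {
    double final_value = y.back();          // assume the trace ends settled
    // Scan backwards for the last sample outside the tolerance band.
    for (std::size_t i = y.size(); i-- > 0;)
        if (std::fabs(y[i] - final_value) > tol)
            return i + 1;                   // stable period starts here
    return 0;                               // never left the band
}

int main() {
    // Step response settling toward 1.0 (synthetic example data).
    std::vector<double> y;
    for (int k = 0; k < 50; ++k)
        y.push_back(1.0 - std::exp(-0.2 * k));
    std::size_t s = settle_index(y, 0.05);
    std::cout << "transient: samples [0, " << s << "), "
              << "stable: samples [" << s << ", " << y.size() << ")\n";
}
```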

    EARLY PERFORMANCE PREDICTION METHODOLOGY FOR MANY-CORES ON CHIP BASED APPLICATIONS

    Modern high-performance computing applications such as personal computing, gaming, and numerical simulation require application-specific integrated circuits (ASICs) that comprise many cores. Performance for these applications depends mainly on the latency of the interconnects that transfer data between the cores implementing the application's distributed tasks. Time-to-market is a critical consideration when designing ASICs for these applications. Therefore, to reduce design-cycle time, predicting system performance accurately at an early stage of design is essential. With process technology in the nanometer era, physical phenomena such as crosstalk and reflection on the propagating signal have a direct impact on performance, and incorporating these effects provides a better performance estimate at an early stage. This work presents a methodology for better performance prediction at an early stage of design, achieved by mapping the system specification to a circuit-level netlist description. At the system level, SystemVerilog descriptions are employed to simplify description and enable efficient simulation; for modeling system performance at this abstraction level, queueing-theory-based bounded-queue models are applied. At the circuit level, behavioral Input/Output Buffer Information Specification (IBIS) models can be used to analyze the effects of these physical phenomena on on-chip signal integrity, and hence on performance. For behavioral circuit-level performance simulation with IBIS models, a netlist must be described consisting of the interacting cores and a communication link. Two new netlists, IBIS-ISS and IBIS-AMI-ISS, are introduced for this purpose. The cores are represented by a macromodel automatically generated from the IBIS models by a tool developed in this work, and the generated macromodels are employed in the new netlists. The early performance prediction methodology maps a system specification to an instance of these netlists to provide a better performance estimate at an early stage of design. The methodology is scalable in nanometer process technology and can be reused across different designs.
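
The queueing-theory view of interconnect latency can be made concrete with a tiny discrete-event simulation of a bounded (finite-capacity) queue between two cores: packets arriving when the buffer is full are dropped, and latency is measured per delivered packet. The sketch below is a generic M/M/1/K illustration under assumed arrival and service rates, not the paper's SystemVerilog models.

```cpp
// Generic M/M/1/K bounded-queue sketch (not the paper's SystemVerilog
// models): estimate mean packet latency over an interconnect whose buffer
// holds at most K packets; arrivals seen when the buffer is full are lost.
#include <iostream>
#include <queue>
#include <random>

int main() {
    const int    K = 8;              // assumed buffer capacity
    const double lambda = 0.9;       // assumed arrival rate (packets/cycle)
    const double mu     = 1.0;       // assumed service rate (packets/cycle)

    std::mt19937 rng(7);
    std::exponential_distribution<double> inter(lambda), service(mu);

    std::queue<double> buffer;       // stores each packet's arrival time
    double t_arrive = inter(rng), t_depart = 1e300;
    double total_latency = 0; long served = 0, dropped = 0;

    for (long events = 0; events < 200000; ++events) {
        if (t_arrive < t_depart) {                   // next event: arrival
            if ((int)buffer.size() < K) {
                if (buffer.empty())                  // server was idle
                    t_depart = t_arrive + service(rng);
                buffer.push(t_arrive);
            } else ++dropped;                        // buffer full: loss
            t_arrive += inter(rng);
        } else {                                     // next event: departure
            total_latency += t_depart - buffer.front();
            buffer.pop(); ++served;
            t_depart = buffer.empty() ? 1e300 : t_depart + service(rng);
        }
    }
    std::cout << "mean latency: " << total_latency / served
              << " cycles, dropped: " << dropped << "\n";
}
```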

    Assessing and improving the quality of model transformations

    Software is pervading our society more and more and is becoming increasingly complex. At the same time, software quality demands remain at the same, high level. Model-driven engineering (MDE) is a software engineering paradigm that aims at dealing with this increasing software complexity and improving productivity and quality. Models play a pivotal role in MDE. The purpose of using models is to raise the level of abstraction at which software is developed to a level where concepts of the domain in which the software has to be applied, i.e., the target domain, can be expressed effectively. For that purpose, domain-specific languages (DSLs) are employed. A DSL is a language with a narrow focus, i.e., it is aimed at providing abstractions specific to the target domain. As a result, the application of models developed using DSLs is typically restricted to describing concepts existing in that target domain. Reuse of models such that they can be applied for different purposes, e.g., analysis and code generation, is one of the challenges that should be solved by applying MDE. Therefore, model transformations are typically applied to transform domain-specific models into other (equivalent) models suitable for different purposes. A model transformation is a mapping from a set of source models to a set of target models, defined as a set of transformation rules. MDE is gradually being adopted by industry. Since MDE is becoming more and more important, model transformations are becoming more prominent as well. Model transformations are in many ways similar to traditional software artifacts and therefore need to adhere to similar quality standards. The central research question addressed in this thesis is therefore as follows: how can the quality of model transformations be assessed and improved, in particular with respect to development and maintenance? Recall that model transformations facilitate the reuse of models in a software development process. We have developed a model transformation that enables reuse of analysis models for code generation. The semantic domains of the source and target language of this model transformation are so far apart that a straightforward transformation is impossible, i.e., a semantic gap has to be bridged. To deal with model transformations that have to bridge a semantic gap, the semantics of the source and target language, as well as possible additional requirements, should be well understood. When bridging a semantic gap is not straightforward, we recommend addressing a simplified version of the source metamodel first. Finally, the requirements on the transformation may, if possible, be relaxed to enable automated model transformation. Model transformations that need to transform between models in different semantic domains are expected to be more complex than those that merely transform syntax. The complexity of a model transformation has consequences for its quality. Quality, in general, is a subjective concept and can therefore be defined in different ways; we defined it in the context of model transformation. A model transformation can be considered either as a transformation definition or as the process of transforming a source model into a target model. Accordingly, model transformation quality can be defined in two different ways. The quality of the definition is referred to as its internal quality. The quality of the process of transforming a source model into a target model is referred to as its external quality.
There are also two ways to assess the quality of a model transformation (both internal and external). It can be assessed directly, i.e., by performing measurements on the transformation definition, or indirectly, i.e., by performing measurements in the environment of the model transformation. We mainly focused on direct assessment of internal quality, but we also addressed external quality and indirect assessment. Given this definition of quality in the context of model transformations, techniques can be developed to assess it. Software metrics have been proposed for measuring various kinds of software artifacts. However, hardly any research has been performed on applying metrics to assess the quality of model transformations. For four model transformation formalisms with different characteristics, viz., ASF+SDF, ATL, Xtend, and QVTO, we defined sets of metrics for measuring model transformations developed with these formalisms. While these metric sets can be used to indicate bad smells in the code of model transformations, they cannot yet be used for assessing quality: a relation has to be established between the metric sets and attributes of model transformation quality. For two of the aforementioned metric sets, viz., those for ASF+SDF and ATL, we conducted empirical studies aimed at establishing such a relation. From these empirical studies we learned which metrics serve as predictors for different quality attributes of model transformations. Metrics can be used to quickly acquire insights into the characteristics of a model transformation. These insights enable increasing the overall quality of model transformations and thereby also their maintainability. To support maintenance, and also development in a traditional software engineering process, visualization techniques are often employed, and for model transformations this appears to be a feasible approach as well. Currently, however, there are few visualization techniques available that are tailored towards analyzing model transformations. One of the most time-consuming processes during software maintenance is acquiring an understanding of the software, and we expect that this holds for model transformations as well. Therefore, we presented two complementary visualization techniques for facilitating model transformation comprehension. The first technique is aimed at visualizing the dependencies between the components of a model transformation. The second technique is aimed at analyzing the coverage of the source and target metamodels by a model transformation. The development of the metric sets, and in particular the empirical studies, led to insights concerning the development of model transformations; the proposed visualization techniques are likewise aimed at facilitating that development. We applied the insights acquired from the development of the metric sets, as well as the visualization techniques, in the development of a chain of model transformations that bridges a number of semantic gaps. We chose to solve this transformational problem not with one model transformation but with a number of smaller model transformations. This leads to smaller transformations, which are more understandable. The language on which the model transformations are defined was subject to evolution; in particular, the coverage visualization proved to be beneficial for the co-evolution of the model transformations.
Summarizing, we defined quality in the context of model transformations and addressed the necessity of a methodology to assess it. To that end, we defined metric sets and performed empirical studies to validate whether they serve as predictors for model transformation quality. We also proposed a number of visualizations to increase model transformation comprehension. The insights acquired from developing the metric sets and the empirical studies, as well as the visualization tools, proved to be beneficial for developing model transformations.
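
To make the metrics idea tangible: a direct assessment of internal quality boils down to computing size and complexity counts over the rules of a transformation definition. The sketch below computes two toy metrics (rule count and mean lines per rule) over an ATL-like text; it is purely an illustration of metric extraction, not one of the thesis's metric sets.

```cpp
// Toy illustration of direct metric extraction over a transformation
// definition (not one of the thesis's metric sets): count the rules in an
// ATL-like text and report the average number of lines per rule.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    const std::string transformation =
        "rule Class2Table {\n"
        "  from c : UML!Class\n"
        "  to   t : DB!Table ( name <- c.name )\n"
        "}\n"
        "rule Attr2Column {\n"
        "  from a : UML!Attribute\n"
        "  to   col : DB!Column ( name <- a.name )\n"
        "}\n";

    int rules = 0, lines = 0;
    std::istringstream in(transformation);
    for (std::string line; std::getline(in, line); ++lines)
        if (line.rfind("rule ", 0) == 0)   // line starts with "rule "
            ++rules;

    std::cout << "rules: " << rules
              << ", mean lines/rule: " << double(lines) / rules << "\n";
}
```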

    Interactive Model-Based Compilation: A Modeller-Driven Development Approach

    There is a growing tendency toward using domain-specific languages, which help domain experts stay focussed on abstract problem solutions. It is important to carefully design these languages and their tools, which fundamentally perform model-to-model transformations. The quality of both usually decides the effectiveness of the subsequent development and therefore the quality of the final applications. However, as the complexity and safety requirements of modern systems grow, it becomes increasingly burdensome to create highly customized languages and difficult to provide reasonable overviews within these tools. This thesis introduces a new interactive model-based compilation methodology. Compilations for arbitrary model-to-model transformations are themselves described as models. They can be instantiated for particular inputs, e.g., a program, to create concrete compilation runs, which return the result of that compilation. The compilation instance is interactively observable. Intermediate results serve as new inputs and as documentation; they can be used to create highly customized views and facilitate understandability. This methodology guides modellers from the start of the compilation to the final result so that they can interactively refine their models. The methodology has been implemented and validated as the KIELER Compiler (KiCo) and is available as part of the KIELER open-source project. It is used to implement the current reference compiler for the SCCharts language, a statecharts dialect designed for specifying safety-critical reactive systems based on a synchronous model of computation. The interactive model-based compilation approach was key to the rapid prototyping of three different compilation strategies, as well as of new language extensions, variations, and closely related languages. The results are verified with benchmarks, which are themselves modelled using the same approach and technology. The usability of the SCCharts language and the KiCo tooling is documented with long-term surveys and real-life industrial, academic, and teaching examples.
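
The core idea, compilations described as models whose intermediate results are observable, can be sketched as a pipeline that retains the output of every transformation step instead of only the final artifact. The C++ below is our own minimal rendering of that scheme, not KiCo code; the Pass and CompilationRun names and the string-based "models" are invented for illustration.

```cpp
// Minimal sketch of an interactive model-based compilation run (our own
// rendering, not KiCo): every pass's intermediate result is retained and
// observable, so a modeller can inspect any step of the compilation.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Pass {                         // hypothetical name, for illustration
    std::string name;
    std::function<std::string(const std::string&)> transform;
};

struct CompilationRun {
    std::vector<std::pair<std::string, std::string>> intermediates;

    std::string run(std::string model, const std::vector<Pass>& passes) {
        for (const Pass& p : passes) {
            model = p.transform(model);
            intermediates.push_back({p.name, model});  // keep every result
        }
        return model;
    }
};

int main() {
    std::vector<Pass> passes = {
        {"expand-abbreviations", [](const std::string& m) { return m + " [expanded]"; }},
        {"normalize",            [](const std::string& m) { return m + " [normalized]"; }},
        {"generate-code",        [](const std::string& m) { return m + " [code]"; }},
    };
    CompilationRun run;
    run.run("model", passes);
    for (auto& [name, model] : run.intermediates)      // observable steps
        std::cout << name << ": " << model << "\n";
}
```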

    A situational awareness model for data analysis on 5G mobile networks: the SELFNET analyzer framework

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 14-07-2017. 5G networks are expected to provide a secure, reliable and high-performance environment with minimal disruptions in the provisioning of advanced network services, regardless of the device location or of when the service is required. This new network generation will be able to deliver ultra-high capacity, low latency and better Quality of Service (QoS) compared with current Long Term Evolution (LTE) networks. In order to provide these capabilities, 5G proposes the combination of advanced technologies such as Software Defined Networking (SDN), Network Function Virtualization (NFV), Self-organized Networks (SON) and Artificial Intelligence. In particular, 5G will be able to face unexpected changes or network problems through the identification of specific situations, taking into account the user needs and the Service Level Agreements (SLAs). Nowadays, the main telecommunication operators and the research community are working on strategies to facilitate the data analysis and decision-making process when unexpected events compromise the health of 5G networks. Meanwhile, the concept of Situational Awareness (SA) and incident-management models applied to 5G networks are also at an early stage of development. The key idea behind these concepts is to mitigate or prevent harmful situations in a reactive and proactive way. In this context, the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) project combines SDN, NFV and SON concepts to provide a smart autonomic management framework for 5G networks. SELFNET resolves common network problems, while improving the QoS and Quality of Experience (QoE) of end users...
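
A situational-awareness analyzer of the kind described here essentially correlates low-level metric streams into higher-level symptoms before any decision is taken. The sketch below is one minimal, generic reading of that step (threshold-based symptom detection over a sliding metric window); it is our illustration, with invented names like SymptomDetector, and not the SELFNET Analyzer's actual design.

```cpp
// Generic illustration of a situational-awareness step (not the SELFNET
// Analyzer's design): aggregate a raw metric stream into a window average
// and raise a higher-level symptom when it crosses a threshold.
#include <deque>
#include <iostream>
#include <numeric>

struct SymptomDetector {               // hypothetical name, for illustration
    std::deque<double> window;
    std::size_t window_size;
    double threshold;

    bool observe(double metric) {      // returns true when a symptom is raised
        window.push_back(metric);
        if (window.size() > window_size) window.pop_front();
        double avg = std::accumulate(window.begin(), window.end(), 0.0)
                     / window.size();
        return window.size() == window_size && avg > threshold;
    }
};

int main() {
    SymptomDetector cpu{{}, 3, 80.0};  // 3-sample window, 80% CPU threshold
    double samples[] = {40, 55, 70, 85, 90, 95};
    for (double s : samples)
        if (cpu.observe(s))
            std::cout << "symptom: sustained high CPU (avg > 80%)\n";
}
```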