3 research outputs found

    New fault injection techniques for embedded systems using virtual models described at the transaction level

    Better software, faster: this is the challenge that follows from the need to build ever more intelligent systems. In any current embedded design, software is a fundamental component that gives the system a high degree of configurability, a large number of features, and flexible behaviour in exceptional situations. When, in addition, the hardware/software combination integrated in a System on Chip (SoC) is part of a critical control system with fault-tolerance requirements, exhaustive verification of those requirements consumes an ever larger share of the total resources devoted to developing and commissioning the system. In this context, classical co-design and co-verification methodologies are completely inefficient, and new technologies and tools are needed for the early development and verification of embedded software. Among them is the proposal of this thesis, which addresses the problem through executable hardware models defined at the transaction level. Because of the strict robustness requirements that govern the development of space software, verification tasks must be carried out at very early stages of development to ensure that the fault-tolerance mechanisms put forward in the system specification work correctly. In general, it is desirable to perform these tasks in parallel with the hardware development, anticipating problems or errors in the system specification. Moreover, complete verification of the exception mechanisms implemented in the software may be impossible to carry out on real hardware, since fault scenarios must be generated artificially and systematically using fault injection techniques that allow controllable, observable, and reproducible injection campaigns. This thesis describes the research, development, and use of a fault-injection-capable virtual platform named "Leon2ViP", based on SystemC/TLM2 interfaces, for the early development and verification of embedded software within the Solar Orbiter project. In this way it has been possible to run and test exactly the same binary code that will execute on the real hardware, but in a more controllable and deterministic environment. This enables highly focused fault injection campaigns that would not be possible otherwise. The use of "Leon2ViP" has brought a significant improvement, in terms of cost and time, to the development and verification of the boot software of the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on board Solar Orbiter.
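
    The abstract above describes fault injection performed on a transaction-level model rather than on real hardware. The sketch below illustrates the general idea with a minimal SystemC/TLM-2.0 memory target that can flip a single bit in read data during a blocking transport call; the class, member, and parameter names (FaultyMemory, fault_addr, fault_bit, the 10 ns latency) are illustrative assumptions, not the actual Leon2ViP implementation.

    // Minimal sketch of transaction-level fault injection (hypothetical names,
    // not the Leon2ViP API): a TLM-2.0 memory target that corrupts one bit of
    // read data when a one-shot fault is armed.
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstring>

    struct FaultyMemory : sc_core::sc_module {
        tlm_utils::simple_target_socket<FaultyMemory> socket;

        // Hypothetical fault campaign parameters: which byte and bit to corrupt.
        sc_dt::uint64 fault_addr  = 0;
        unsigned      fault_bit   = 0;
        bool          fault_armed = false;

        explicit FaultyMemory(sc_core::sc_module_name name)
            : sc_module(name), socket("socket") {
            socket.register_b_transport(this, &FaultyMemory::b_transport);
            std::memset(mem, 0, sizeof(mem));
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            sc_dt::uint64  addr = trans.get_address();   // no bounds checks, for brevity
            unsigned char* ptr  = trans.get_data_ptr();
            unsigned       len  = trans.get_data_length();

            if (trans.is_write())
                std::memcpy(&mem[addr], ptr, len);
            else
                std::memcpy(ptr, &mem[addr], len);

            // Inject a single bit flip into the read data when the armed fault
            // falls inside this transaction: controllable and reproducible.
            if (fault_armed && trans.is_read() &&
                fault_addr >= addr && fault_addr < addr + len) {
                ptr[fault_addr - addr] ^= static_cast<unsigned char>(1u << (fault_bit & 7));
                fault_armed = false;                      // one-shot fault
            }

            delay += sc_core::sc_time(10, sc_core::SC_NS); // modeled access latency
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }

      private:
        unsigned char mem[0x1000];
    };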

    SoCRocket - A flexible and extensible Virtual Platform for the development of robust Embedded Systems

    The focus of this work is raising the abstraction level in the development process, especially for the design of systems based on Virtual Platforms (VPs), Transaction Level Modeling (TLM), and SystemC. A holistic method for the efficient modeling of complex embedded systems is presented. The result is accuracy close to RTL synthesis, with much higher flexibility and simulation speed. The SoCRocket system builds on existing standards and introduces methods for their efficient use to improve simulation speed and accuracy. Among other things, it is shown how modern multi-channel protocols with split transfers can be accurately modeled by compensating inter-transaction timing, without introducing additional protocol phases. Standardization gaps in the areas of memory modeling and system configuration are closed by open, standards-aligned solutions. Furthermore, new infrastructure for modeling signal communication at the transaction level, for verifying components, and for estimating power consumption is presented. To demonstrate the approach, the core components of a hardware library widely used in the European space sector were modeled. All components were first verified in unit tests and then integrated into a system prototype.
    For functional verification, as well as for measuring simulation speed and timing accuracy, the prototype was configured for different abstraction levels and compared to a VHDL-based RISC reference design (LEON3MP). The loosely-timed (LT) platform with blocking communication is on average 561 times faster than the RTL reference, with an average timing deviation of 7.04%. The approximately-timed (AT) system with non-blocking communication is 335 times faster; here the timing deviation is only 3.03%, corresponding to a standard deviation of 0.033 and thus a very high statistical confidence. The various abstraction levels can be used for multi-stage architecture exploration, as demonstrated on a hyperspectral image compression application.
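
    The LT figures above (blocking communication, 561 times faster than RTL) hinge on the loosely-timed TLM-2.0 coding style, in which targets annotate their latency onto a delay argument and initiators synchronise with the kernel only once per time quantum. The sketch below shows that pattern with the standard tlm_utils quantum keeper; the module name, traffic pattern, and 1 us quantum are illustrative assumptions and not taken from SoCRocket, and running it would require an LT target bound to the socket.

    // Minimal sketch of a loosely-timed (LT) initiator with temporal decoupling,
    // the modeling style compared against RTL above. Names and parameters are
    // illustrative, not SoCRocket's actual classes.
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/tlm_quantumkeeper.h>

    struct LtInitiator : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<LtInitiator> socket;
        tlm_utils::tlm_quantumkeeper qk;

        SC_HAS_PROCESS(LtInitiator);
        explicit LtInitiator(sc_core::sc_module_name name)
            : sc_module(name), socket("socket") {
            tlm_utils::tlm_quantumkeeper::set_global_quantum(
                sc_core::sc_time(1, sc_core::SC_US));
            qk.reset();
            SC_THREAD(run);
        }

        void run() {
            tlm::tlm_generic_payload trans;
            unsigned char data[4] = {0};

            for (unsigned i = 0; i < 1024; ++i) {
                trans.set_command(tlm::TLM_READ_COMMAND);
                trans.set_address(i * 4);
                trans.set_data_ptr(data);
                trans.set_data_length(4);
                trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

                // Blocking transport: the target adds its latency to 'delay'
                // instead of calling wait(), which keeps simulation fast.
                sc_core::sc_time delay = qk.get_local_time();
                socket->b_transport(trans, delay);
                qk.set(delay);

                // Synchronise with the SystemC kernel only when the local time
                // offset exceeds the global quantum (temporal decoupling).
                if (qk.need_sync())
                    qk.sync();
            }
        }
    };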

    On the simulation and design of manycore CMPs

    The progression of Moore's Law has resulted in both embedded and performance computing systems that use an ever increasing number of processing cores integrated in a single chip. Commercial systems are now available which provide hundreds of cores, and academics have proposed architectures with up to 1024 cores. Embedded multicores are increasingly popular because it is easier to guarantee hard real-time constraints using individual cores dedicated to tasks than with traditional time-multiplexed processing. However, finding the optimal hardware configuration to meet these requirements at minimum cost requires extensive trial-and-error exploration of the design space. This thesis tackles the problems encountered in the design of these large-scale multicore systems by first addressing the problem of fast, detailed micro-architectural simulation. Initially targeting embedded systems, this work exploits the lack of hardware cache-coherence support in many deeply embedded systems to increase the parallelism available in the simulation. Then, by partitioning the NoC and using packet counting and cycle skipping, it reduces the amount of computation required to accurately model the NoC interconnect. In combination, this enables simulation speeds significantly higher than the state of the art while maintaining lower error, relative to real hardware, than any similar simulator. Simulation speeds reach up to 370 MIPS (million target instructions per second), or 110 MHz, which is better than typical FPGA prototypes and approaches final ASIC production speeds. This is achieved while maintaining an error of only 2.1%, significantly lower than other similar simulators. The thesis continues by scaling the simulator beyond large embedded systems up to 64-1024-core processors, adding support for coherent architectures using the same packet counting techniques, along with low-overhead context switching, to enable the simulation of such large systems with their stricter synchronisation requirements. The new interconnect model was partitioned to enable parallel simulation, further improving simulation speed without sacrificing accuracy. These innovations were leveraged to investigate significant novel energy-saving optimisations to the coherency protocol, processor ISA, and processor micro-architecture. By introducing a new instruction, named wait-on-address, the energy spent during spin-wait style synchronisation events can be significantly reduced. It works by putting the core into a low-power idle state while the cache line of the indicated address is monitored for coherency action. Upon an update or invalidation (or a traditional timer or external interrupt) the core resumes execution, but the active energy of running the core pipeline and repeatedly accessing the data and instruction caches is effectively reduced to static idle power. The thesis also shows that existing combined software-hardware schemes for tracking data regions that do not require coherency can adequately address the directory-associativity problem, and it introduces a new coherency sharer encoding which reduces the energy consumed by sharer invalidations when sharers are grouped closely together, as would be the case in a system running many tasks with a small degree of parallelism in each.
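
    The wait-on-address mechanism described above is a hardware feature, but its effect on synchronisation code can be illustrated with a software analogue: C++20's std::atomic wait/notify likewise parks a waiter until the watched location changes instead of busy-polling it. The sketch below contrasts the two patterns; it models the idea only and is not the thesis's ISA extension.

    // Software analogue of the spin-wait pattern that wait-on-address targets.
    // std::atomic<T>::wait (C++20) stands in for the proposed instruction:
    // the waiter idles until the monitored location is updated, rather than
    // repeatedly hitting the data cache in a busy loop.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<int> flag{0};

    void spin_waiter() {
        // Traditional spin-wait: keeps the pipeline and caches active.
        while (flag.load(std::memory_order_acquire) == 0) { /* spin */ }
        std::printf("spin waiter released\n");
    }

    void monitored_waiter() {
        // wait-on-address style: block until the monitored value changes,
        // then resume execution.
        flag.wait(0, std::memory_order_acquire);
        std::printf("monitored waiter released\n");
    }

    int main() {
        std::thread t1(spin_waiter);
        std::thread t2(monitored_waiter);

        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        flag.store(1, std::memory_order_release);
        flag.notify_all();   // wake the monitored waiter

        t1.join();
        t2.join();
        return 0;
    }
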
    The research concludes by using the extremely fast simulation speeds developed to produce a large set of training data, collecting runtime and energy statistics for a wide range of embedded applications on a large and diverse range of potential MPSoC designs. These data were used to train a series of machine-learning-based models, which were then evaluated on their capacity to predict performance characteristics of unseen workload combinations across the explored MPSoC design space using only two sample simulations, with promising results from some of the machine learning techniques. The models were then used to produce a ranking of predicted performance across the design space; on average, Random Forest was able to predict the best design to within 89% of the runtime performance of the actual best tested design, and better than 93% of the alternative design space. When predicting a weighted metric of energy, delay, and area, Random Forest on average produced results within 93% of the optimum. In summary, this thesis improves upon the state of the art in cycle-accurate multicore simulation, introduces novel energy-saving changes to the ISA and microarchitecture of future multicore processors, and demonstrates the viability of machine learning techniques to significantly accelerate the design space exploration required to bring a new manycore design to market.