9 research outputs found

    Predictable multi-processor system on chip design for multimedia applications

    The design of multimedia systems has become increasingly complex due to consumer requirements. Consumers demand from these systems the functionality of a full desktop computer, and since many of these systems are mobile, their power consumption and size must be small. For reasons of power and performance, these systems are increasingly based on multi-processor systems-on-chip (MPSoCs). Applications execute on these systems in different combinations, known as use-cases, and may have different performance requirements in each use-case. Currently, verification of all these use-cases takes the bulk of the design effort. There is a need for analysis-based techniques so that the platforms have predictable behaviour and in turn provide performance guarantees without expending precious man-hours on verification. In this dissertation, techniques and architectures have been developed to design and manage these multi-processor based systems efficiently. The dissertation presents predictable architectural components for MPSoCs, a predictable MPSoC design strategy, an automatic platform synthesis tool, a run-time system and an MPSoC simulation technique. The introduction of predictability helps in the rapid design of MPSoC platforms. Chapter 1 of the thesis studies the trends in modern multimedia applications and processor architectures. The chapter further highlights the problems in the design of MPSoC platforms and emphasizes the need for predictable design techniques. Predictable design techniques require predictable application and architectural components. The chapter further elaborates on Synchronous Data Flow (SDF) graphs, which are used to model the applications throughout this thesis. The chapter presents the architecture template used in this thesis and lists the contributions of the thesis. One of the contributions of this thesis is the design of a predictable component called the communication assist. Chapter 2 of the thesis describes the architecture of this communication assist. The communication assist presented in this thesis not only decouples communication from computation but also provides timing guarantees. Based on this communication assist, an MPSoC platform generation technique has been developed that can design MPSoC platforms capable of satisfying the throughput constraints of multiple applications in all use-cases. The technique is presented in Chapter 3. The design strategy uses three simple steps for platform design: the first step finds the required number of processors, the second step minimizes the communication interconnect between the processors, and the third step minimizes the communication memory requirement of the platform. Further, in Chapter 4, a tool has been developed to generate these communication-assist-based (CA-based) platforms for FPGAs. The output of this tool can be used to synthesize platforms on real hardware with the help of FPGA synthesis tools. The applications executing on these platforms often exhibit dynamism, e.g. variation in task execution times and changes in application throughput requirements. Further, new applications may be added by consumers at run-time. Resource managers have been presented in the literature to handle such dynamic situations. However, the scalability of these resource managers becomes an issue as the number of processors and applications increases. Chapter 5 presents distributed run-time resource management techniques; two versions of distributed resource managers are presented which scale with the number of applications and processors. MPSoC platforms for real-time applications are designed assuming worst-case task execution times. It is known that the difference between average-case and worst-case behaviour can be quite large, so knowing the average-case performance is also important for the system designer, and software simulation is often employed to estimate it. However, simulation in software is slow and does not scale with the number of applications and processing elements. In Chapter 6, a fast and scalable simulation methodology is introduced that can simulate the execution of multiple applications on an MPSoC platform. It is based on parallel execution of SDF (Synchronous Data Flow) models of applications. The simulation methodology uses Parallel Discrete Event Simulation (PDES) primitives and is termed "Smart Conservative PDES". The methodology generates a parallel simulator which is synthesizable on FPGAs. The framework can also be used to model dynamic arbitration policies, which are difficult to analyse using models. The generated platform is also useful in carrying out Design Space Exploration, as shown in the thesis. Finally, Chapter 7 summarizes the main findings and (practical) implications of the studies described in the previous chapters of this dissertation. Using the contributions of the thesis, a designer can design and implement predictable multiprocessor-based systems capable of satisfying the throughput constraints of multiple applications in a given set of use-cases, and employ resource management strategies to deal with dynamism in the applications. The chapter also describes the main limitations of this dissertation and makes suggestions for future research.
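
    A minimal sketch of the SDF modelling idea on which this predictable design flow rests is given below: it solves the balance equations of a tiny, invented SDF graph and derives a single-processor bound on the iteration period from assumed worst-case execution times. The graph, rates and cycle counts are hypothetical and are not taken from the thesis.

        # Illustrative sketch: checking a throughput bound for a small
        # Synchronous Data Flow (SDF) graph under worst-case execution times.
        # The graph, rates and times below are made-up examples.
        from math import gcd
        from functools import reduce
        from fractions import Fraction

        # Each edge: (producer, consumer, tokens produced, tokens consumed)
        edges = [("src", "filter", 2, 3), ("filter", "sink", 1, 1)]
        wcet = {"src": 50, "filter": 120, "sink": 30}   # cycles, worst case

        def repetition_vector(edges, actors):
            """Solve the SDF balance equations q[p]*prod == q[c]*cons."""
            q = {a: None for a in actors}
            q[actors[0]] = Fraction(1)
            changed = True
            while changed:
                changed = False
                for p, c, prod, cons in edges:
                    if q[p] is not None and q[c] is None:
                        q[c] = q[p] * prod / cons; changed = True
                    elif q[c] is not None and q[p] is None:
                        q[p] = q[c] * cons / prod; changed = True
            lcm_den = reduce(lambda a, b: a * b // gcd(a, b),
                             (q[a].denominator for a in actors))
            return {a: int(q[a] * lcm_den) for a in actors}

        actors = list(wcet)
        q = repetition_vector(edges, actors)
        # Single-processor bound: one graph iteration takes at least this long.
        iteration_period = sum(q[a] * wcet[a] for a in actors)
        print(q, "cycles per iteration:", iteration_period)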

    Analysis and Design of High Speed Serial Interfaces for Automotive Applications

    The demand for an enriched end-user experience and increased performance in next-generation electronic applications is never ending, and it is a common trend across a wide spectrum of applications belonging to different markets, such as computing, mobile communication and automotive. For this reason, High Speed Serial Interfaces (HSSIs) have become widespread components in today's electronics, with a constant demand for power reduction and data-rate increase. In the context of gigabit serial systems, the work discussed in this thesis develops in two directions: on one hand, the aim is to support the continuous data-rate increase with the development of novel link modelling approaches to be employed for system-level evaluation and as support in the design and characterization phases. On the other hand, the design considerations and challenges in the implementation of the transmitter, one of the most delicate blocks for the signal-integrity performance of the link, are central. The first part of the activity, regarding link performance prediction, led to the development of an enhanced statistical simulation approach capable of accounting for the transmitter waveform shape in the ISI analysis, a characteristic that is missing from the available state-of-the-art simulation approaches. The proposed approach has been extensively tested by comparison with traditional simulation approaches (Spice-like simulators) and validated against the experimental characterization of a test system, with satisfactory results. The second part of the activity consists of the design of a high-speed transmitter in a deeply scaled CMOS technology, spanning from the concept of the circuit to its implementation and characterization. The targets of the design are to achieve a data rate of 5 Gb/s with a minimum voltage swing of 800 mV, thus doubling the data rate of the current transmitter implementation, and to reduce the power dissipation by adopting a voltage-mode architecture. The experimental characterization of the fabricated lot draws a twofold picture, with some of the performance figures showing very good qualitative and quantitative agreement with pre-silicon simulations, and others revealing a poor performance level, especially for the eye diagram. An investigation of the root causes through analysis of the physical silicon design, of the bonding scheme of the prototypes and of the pre-silicon simulations is reported. Guidelines for the redesign of the circuit are also given.
    In the landscape of electronic applications, the improvement in performance of a product from one generation to the next aims to offer the end user new functions and to improve existing ones. In recent years, thanks to the constant advancement of integrated technology, there has been an enormous growth in the computational capability of devices in all market segments, such as information technology, mobile communication and automotive. The resulting need to connect different devices within the same application and to transfer large amounts of data has led to the widespread adoption of High Speed Serial Interfaces (HSSIs). The need to reduce power consumption and increase the bit rate of this type of application has therefore become a research topic of great interest. The work discussed in this thesis falls within the field of serial data transmission at bit rates above 1 Gb/s and develops in two directions: on the one hand, to support the continuous increase of the bit rate in new generations of interfaces, the development of new system modelling approaches has been addressed, to be employed in the evaluation of interface performance and in support of the design and characterization phases. On the other hand, attention has been focused on the challenges and problems inherent in the design of one of the most delicate blocks for system performance, the transmitter. The first part of the thesis concerns the development of an innovative statistical simulation approach, able to include in the inter-symbol interference analysis the waveform produced at the transmitter output, a feature not present in other simulation approaches proposed in the literature. The proposed technique is extensively tested by comparison with traditional (Spice-like) simulation approaches and against the experimental characterization of a test system, with fully satisfactory results. The second part of the activity concerns the design of an integrated high-speed transmitter in 40 nm CMOS technology, extending from the feasibility study of the circuit to its realization and characterization. The objectives are to reach a bit rate of 5 Gb/s, thereby doubling the bit rate of the current implementation, and a minimum differential output voltage of 800 mV (peak-to-peak), while at the same time reducing the dissipated power through the adoption of a voltage-mode architecture. The experimental results obtained from the first fabricated lot do not paint a uniform picture: some performance figures show excellent qualitative and quantitative agreement with the pre-fabrication simulations, while unsatisfactory results were obtained in particular for the eye diagram. Through analysis of the prototype layout, of the bonding between silicon and package and of the pre-fabrication simulations, it was possible to trace the factors responsible for the degradation of performance with respect to the pre-fabrication predictions, and to define the guidelines to be followed in the future design of a new prototype.
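
    The sketch below illustrates, in very simplified form, the idea of including the transmitter waveform in the ISI analysis: it constructs a hypothetical single-bit response that already contains the transmitter edge shaping, then bounds the vertical eye opening with a plain peak-distortion sum over the ISI cursors. All signals and constants are invented, and the calculation does not reproduce the statistical engine developed in the thesis.

        # Minimal sketch of waveform-aware ISI analysis with made-up signals.
        import numpy as np

        samples_per_ui = 16                       # samples per unit interval
        t = np.arange(8 * samples_per_ui)
        # Hypothetical transmitter output for one bit: an NRZ pulse with a
        # finite rise/fall time instead of an ideal rectangle.
        edge = np.ones(4) / 4.0
        tx_bit = np.convolve(np.ones(samples_per_ui), edge)
        # Hypothetical channel impulse response (simple low-pass decay).
        channel = np.exp(-t / (1.5 * samples_per_ui))
        pulse = np.convolve(tx_bit, channel)
        pulse /= pulse.max()                      # normalise to a 1 V swing

        sample_phase = np.argmax(pulse)           # ideal sampling instant
        # Cursor values: samples of the single-bit response one UI apart.
        cursors = pulse[sample_phase % samples_per_ui::samples_per_ui]
        main = pulse[sample_phase]
        isi = cursors.sum() - main                # all interfering cursors
        eye_opening = main - isi                  # worst-case vertical opening
        print(f"main cursor {main:.3f} V, worst-case eye {eye_opening:.3f} V")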

    Vector coprocessor sharing techniques for multicores: performance and energy gains

    Vector Processors (VPs) created the breakthroughs needed for the emergence of computational science many years ago. All commercial computing architectures on the market today contain some form of vector or SIMD processing. Many high-performance and embedded applications, often dealing with streams of data, cannot efficiently utilize dedicated vector processors for various reasons: a limited percentage of sustained vector code due to substantial flow control; inherently small parallelism or the frequent involvement of operating system tasks; varying vector length across applications or within a single application; and data dependencies within short sequences of instructions, a problem further exacerbated without loop unrolling or other compiler optimization techniques. Additionally, existing rigid SIMD architectures cannot efficiently tolerate dynamic application environments with many cores that may require the run-time adjustment of assigned vector resources in order to operate at desired energy/performance levels. To simultaneously alleviate these drawbacks of rigid lane-based VP architectures, while also releasing on-chip real estate for other important design choices, the first part of this research proposes three architectural contexts for the implementation of a shared vector coprocessor in multicore processors. Sharing an expensive resource among multiple cores increases the efficiency of the functional units and the overall system throughput. The second part of the dissertation concerns the evaluation and characterization of the three proposed shared vector architectures from the performance and power perspectives on an FPGA (Field-Programmable Gate Array) prototype. The third part of this work introduces performance and power estimation models based on observations deduced from the experimental results. The results show the opportunity to adaptively adjust the number of vector lanes assigned to individual cores or processing threads in order to minimize various energy-performance metrics on modern vector-capable multicore processors that run applications with dynamic workloads. Therefore, the fourth part of this research focuses on the development of a fine-to-coarse-grain power management technique and a relevant adaptive hardware/software infrastructure which dynamically adjusts the assigned VP resources (number of vector lanes) in order to minimize the energy consumption for applications with dynamic workloads. In order to remove the inherent limitations imposed by FPGA technologies, the fifth part of this work consists of implementing an ASIC (Application Specific Integrated Circuit) version of the shared VP towards precise performance-energy studies involving high-performance vector processing in multicore environments.
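
    As an illustration of the adaptive lane assignment discussed above, the following sketch picks the number of vector lanes for a workload so that an energy-delay product is minimised. The Amdahl-style timing model, the power constants and the function names are hypothetical placeholders, not measurements or interfaces from the dissertation.

        # Illustrative sketch: choose the vector-lane count that minimises an
        # energy-delay product.  All models and constants are hypothetical.

        def predicted_time(work_elems, lanes, vector_fraction=0.8):
            """Amdahl-style estimate: only the vector fraction scales with lanes."""
            return work_elems * ((1 - vector_fraction) + vector_fraction / lanes)

        def predicted_energy(work_elems, lanes, p_lane=1.0, p_static=2.0):
            t = predicted_time(work_elems, lanes)
            return t * (p_static + p_lane * lanes)   # static + per-lane dynamic power

        def best_lane_count(work_elems, available=(1, 2, 4, 8, 16)):
            # Minimise energy * delay; other metrics (energy only, ED^2) plug in here.
            return min(available,
                       key=lambda l: predicted_energy(work_elems, l)
                                     * predicted_time(work_elems, l))

        print(best_lane_count(1_000_000))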

    Architectural Solutions for NanoMagnet Logic

    The successful era of CMOS technology is coming to an end. The limit on the minimum fabrication dimensions of transistors and the increasing leakage power hinder the technological scaling that has characterized the last decades. This problem has been addressed in several different ways by changing the architectures implemented in CMOS, adopting parallel processors and thus increasing the throughput at the same operating frequency. However, architectural alternatives cannot be the definitive answer to a continuous increase in performance dictated by Moore's law; the problem must be addressed from a technological point of view. Several alternative technologies that could substitute CMOS in the coming years are currently under study. Among them, magnetic technologies such as NanoMagnet Logic (NML) are interesting because they do not dissipate any leakage power. Moreover, magnets have memory capability, so it is possible to merge logic and memory in the same device. However, magnetic circuits, and NML in this specific research, also have some important drawbacks that need to be addressed: first, the circuit clock frequency is limited to 100 MHz to avoid errors in data propagation; second, there is a connection between circuit layout and timing, and in particular longer wires have longer latency. These drawbacks are intrinsic to the technology and for this reason cannot be avoided; the only option is to limit their impact from an architectural point of view. The first step followed in the research path of this thesis is indeed the choice and optimization of architectures able to deal with the problems of NML. Systolic Arrays are identified as an ideal solution for this technology, because they are regular structures with local interconnections that limit the long latency of wires; moreover, they are composed of several Processing Elements that work in parallel, thus exploiting parallelization to increase throughput (limiting the impact of the low clock frequency). Through the analysis of Systolic Arrays for NML, several possible improvements have been identified and addressed: 1) a rigorous way to increase throughput with interleaving has been defined, providing equations to estimate the number of operations to be interleaved and rules for providing the inputs; 2) a latency-insensitive circuit has been designed, which exploits a data communication protocol between processing elements to avoid data synchronization problems. This feature has been exploited to design a latency-insensitive Systolic Array that is able to execute the Floyd-Steinberg dithering algorithm. All the improvements presented in this framework apply to Systolic Arrays implemented in any technology, so they can also be exploited to increase the performance of today's CMOS parallel circuits. This research path is presented in Chapter 3. While Systolic Arrays are an interesting solution for NML, their usage can be quite limited because they are normally application-specific. The second research path addresses this problem. A Reconfigurable Systolic Array is presented that can be programmed to execute several algorithms. This architecture has been tested by implementing many algorithms, including FIR and IIR filters, the Discrete Cosine Transform and Matrix Multiplication. This research path is presented in Chapter 4. In common Von Neumann architectures, the logic part of the circuit and the memory are separated. Today, bus communication between logic and memory represents the bottleneck of the system. This problem is addressed by presenting Logic-In-Memory (LIM), an architecture where memory elements are merged into logic ones. This research path aims at defining a real LIM architecture. This has been done in two steps. The first step is represented by an architecture composed of three layers: memory, routing and logic. In the second step the routing plane is no longer present, and its features are inherited by the memory plane. In this solution, a pyramidal memory model is used, where memories near logic elements contain the data most likely to be used, and the other memory layers contain the remaining data and the instruction set. This circuit has been tested with odd-even sort algorithms and benchmarked against GPUs and ASICs. This research path is presented in Chapter 5. MagnetoElastic NML (ME-NML) is a technological improvement of the NML principle, proposed by researchers of Politecnico di Torino, where the clock system is based on the stretch induced in a piezoelectric substrate when a voltage is applied to its boundaries. The main advantage of this solution is that it consumes much less power than the classic clock implementation. This technology had not yet been investigated from an architectural point of view and considering complex circuits. In this research field, a standard methodology for the design of ME-NML circuits has been proposed, based on a Standard Cell Library and an enhanced VHDL model. The effectiveness of this methodology has been proved by designing a Galois Field Multiplier. Moreover, the serial-parallel trade-off in ME-NML has been investigated by designing three different solutions for the Multiply and Accumulate structure. This research path is presented in Chapter 6. While ME-NML is an extremely interesting technology, it needs to be combined with other, faster technologies to obtain a truly competitive system. Signal interfaces between NML and other technologies (mainly CMOS) have rarely been presented in the literature. A mixed-technology multiplexer is designed and presented as the basis for a CMOS-to-NML interface. The reverse interface (from ME-NML to CMOS) is instead based on a circuit sensing the Faraday effect: a change in the polarization of a magnet induces an electric field that can be used to generate an input signal for a CMOS circuit. This research path is presented in Chapter 7. The research work presented in this thesis represents a fundamental milestone on the path towards nanotechnologies. The most important achievement is the design and simulation of complex circuits with NML, benchmarking this technology with real application examples. The characterization of a technology on complex functions is a major step that had not yet been addressed in the literature for NML; indeed, only in this way is it possible to intercept in advance any weakness of NanoMagnet Logic that cannot be discovered considering only small circuits. Moreover, the architectural improvements introduced in this thesis, although technology-driven, can actually be applied to any technology, and the advantages that derive from applying them to CMOS circuits have been demonstrated. This thesis therefore represents a major step in two directions: the first is the enhancement of NML technology; the second is a general improvement of parallel architectures and the development of the new Logic-In-Memory paradigm.
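
    For orientation, the following is a plain sequential reference of the Floyd-Steinberg error-diffusion dithering that the latency-insensitive Systolic Array executes; the thesis maps this algorithm onto hardware, while the sketch only shows the algorithm itself.

        # Sequential reference of Floyd-Steinberg error-diffusion dithering.
        import numpy as np

        def floyd_steinberg(gray):
            """Dither a 2-D array of grey levels in [0, 255] to black/white."""
            img = gray.astype(float)
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = 255.0 if old >= 128 else 0.0
                    img[y, x] = new
                    err = old - new
                    # Diffuse the quantisation error to unprocessed neighbours.
                    if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
                    if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
                    if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
            return img.astype(np.uint8)

        # Toy input: a small horizontal grey ramp.
        print(floyd_steinberg(np.tile(np.arange(8) * 32, (4, 1))))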

    A Flexible, Low-Power, Programmable Unsupervised Neural Network Based on Microcontrollers for Medical Applications

    We present an implementation and laboratory tests of a winner-takes-all (WTA) artificial neural network (NN) on two microcontrollers (ÎŒCs), with the ARM Cortex-M3 and the AVR cores respectively. The prospective application of this device is in a wireless body sensor network (WBSN) for the on-line analysis of electrocardiographic (ECG) and electromyographic (EMG) biomedical signals. The proposed device will be used as a base station in the WBSN, acquiring and analysing the signals from the sensors placed on the human body. The proposed system is equipped with an analog-to-digital converter (ADC) and allows for multi-channel acquisition of analog signals, preprocessing (filtering) and further analysis.
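
    A minimal sketch of the winner-takes-all step at the core of such a network is shown below: each neuron holds a prototype vector, the closest prototype wins, and only the winner is updated. The prototypes, the input vector and the learning rate are toy values, not the ECG/EMG features or parameters used in the paper.

        # Minimal winner-takes-all (WTA) sketch with invented toy values.
        import numpy as np

        prototypes = np.array([[0.9, 0.1, 0.0],    # neuron 0
                               [0.1, 0.8, 0.2],    # neuron 1
                               [0.0, 0.2, 0.9]])   # neuron 2

        def wta_winner(x, weights):
            """Return the index of the neuron with minimum Euclidean distance."""
            return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

        def train_step(x, weights, lr=0.05):
            """Unsupervised update: move only the winning prototype towards x."""
            w = wta_winner(x, weights)
            weights[w] += lr * (x - weights[w])
            return w

        sample = np.array([0.15, 0.75, 0.25])
        print("winner:", train_step(sample, prototypes))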

    FlexWAFE - an architecture for reconfigurable image processing systems

    Recently there has been an increase in demand for high-resolution digital media content in both the cinema and television industries. Currently existing equipment does not meet the requirements or is too costly. New hardware systems and new programming techniques are needed in order to meet the high-resolution, high-quality image requirements and reduce costs. The industry seeks a flexible architecture capable of running multiple applications on top of standard off-the-shelf components, with reduced development time. Until now, standard practice has been to develop specialized architectures and systems that target a single application. This offers little flexibility and leads to high development costs: every new application is designed almost from scratch. Our focus was to develop an architecture that is suited to image stream processing and has the flexibility to run multiple applications using the same FPGA-based hardware platform. The novelty in our approach is that we reconfigure parts of the architecture at run-time, but without incurring the time and added-constraint penalties of FPGA partial-reconfiguration techniques. The architecture uses a hierarchical control structure that is well suited to parallel processing and allows single-cycle-latency reconfiguration of parts of the processing pipeline. This is achieved using relatively few resources for the distributed control structures. To test the developed architecture, a complex film-grain noise reduction algorithm was implemented on an off-the-shelf hardware platform developed by Thomson-Grass Valley. The system met all the requirements and placed very little load on the hierarchical control structures, leaving headroom for much more complex control demands. The architecture has been ported to other hardware platforms, and other applications have been implemented as well. The run-time reconfigurability has proven to be a key factor in the success of the FlexWAFE.
    Recently there has been an increase in demand for high-resolution digital media content in the cinema and television industries. Currently available systems do not meet the requirements or are too expensive. New hardware systems and new programming techniques are required to satisfy the high-resolution, high-quality image requirements and to reduce costs. The industry is looking for a flexible architecture for running multiple applications on standard components, with reduced development times. Until now, common practice has been to develop specialized architectures and systems that target a single application. This offers little flexibility and leads to high development costs; every new application is designed almost from scratch. Our focus was to develop an architecture suited to image processing that has the flexibility to run multiple applications on the same FPGA-based hardware platform. The novelty of our approach is that we reconfigure parts of the architecture at run-time, but without the time and constraint penalties of FPGA partial-reconfiguration techniques. The architecture uses a hierarchical control structure that is well suited to parallel processing and enables single-cycle-latency reconfiguration of parts of the processing pipeline. This is achieved using relatively few resources for the distributed control structures. To test the developed architecture, a complex film-grain noise reduction algorithm was implemented on a standard hardware platform developed by Thomson-Grass Valley. The system meets all requirements and placed very little load on the hierarchical control structures; there is ample headroom for much more complex control demands. The architecture has been ported to other hardware platforms, and other applications have also been implemented. The run-time reconfigurability has proven to be a key factor in the success of FlexWAFE.
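
    The sketch below is a loose software analogy of this style of run-time reconfiguration: the processing stages themselves stay fixed and only their parameter sets and chaining order are swapped between frames, standing in for the single-cycle reconfiguration performed by the distributed controllers. Stage names, parameters and the frame representation are invented for the illustration.

        # Software analogy of run-time pipeline reconfiguration (all names invented).

        class Stage:
            def __init__(self, name, op):
                self.name, self.op, self.params = name, op, {}
            def __call__(self, frame):
                return self.op(frame, **self.params)

        stages = {
            "denoise": Stage("denoise", lambda f, strength=1: f"{f}->dn({strength})"),
            "scale":   Stage("scale",   lambda f, factor=1:   f"{f}->sc({factor})"),
        }

        def reconfigure(pipeline_spec):
            """Apply a new configuration; in hardware this would be a
            single-cycle parameter swap, here just a dictionary update."""
            order = []
            for name, params in pipeline_spec:
                stages[name].params = params
                order.append(stages[name])
            return order

        def run(order, frame):
            for stage in order:
                frame = stage(frame)
            return frame

        cfg_a = [("denoise", {"strength": 3}), ("scale", {"factor": 2})]
        cfg_b = [("scale", {"factor": 4})]                 # a different application
        print(run(reconfigure(cfg_a), "frame0"))
        print(run(reconfigure(cfg_b), "frame1"))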

    New hardware and software combination for motorsport vehicles based on the direct processing of graphical functions

    Doctorate in Electronic Engineering. The main motivation for the work presented here began with previously conducted experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to build a final, fully functional system. Initially, this thesis makes a comprehensive survey of the state of the art in the specific area of the software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of these problems stem directly or indirectly from the fact that we continue to make comprehensive use of extremely long and complex tool chains. Similarly, in the hardware, it is argued that the problems stem from the extreme complexity and inter-dependency inside processor architectures. The conclusions are presented through an extensive list of pitfalls, which are thoroughly enumerated, identified and characterized. Solutions are also proposed for the various current issues, together with implementations of these same solutions. All this final work is part of a proof-of-concept system called "ECU2010". The central element of this system is the aforementioned "Macro" concept, a graphical block representing one of the many operations required in an automotive system, providing arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, which provides high flexibility and scalability and uses exactly the same technology for the ECU, the data logger and the peripherals alike. Current systems follow a mostly evolutionary path, only allowing on-line calibration of parameters, but never the on-line alteration of their own automotive functionality algorithms. By contrast, the system developed and described in this thesis had the advantage of following a clean-slate approach, whereby everything could be rethought globally. In the end, out of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control, etc.) 100% on-line, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This consequently eliminates any turnaround delay typically present in current automotive systems, thereby enhancing the efficiency and handling of such systems.
    The main motivation for the work that led to this thesis was the observation that current methods for modelling automotive ECUs cause significant development and maintenance problems. As a result of that observation, the objective of this work centred on the development of an architectural concept that breaks radically with state-of-the-art models and rests on a set of concepts that came to be called "Macro" and "Cellular ECU". With this model, the intention was simultaneously to minimize the range of software and hardware required to obtain a final working system. Initially, this thesis makes an exhaustive survey of the state of the art in the specific area of the software, and corresponding hardware, of automotive tools and ECUs. The problems arising from such software are identified and, from that identification, it should become clear that practically all of these problems originate directly or indirectly from the continued exhaustive use of extremely long and complex tool chains. Similarly, in the hardware, the problems originate in the extreme complexity and inter-dependency of processor architectures. The consequences are spread over an extensive list of pitfalls, which are also exhaustively enumerated, identified and characterized. Solutions are also proposed for the various current problems, together with the corresponding implementations of those solutions. All this final work is part of a proof-of-concept system called "ECU2010". The central element of this system is the already mentioned "Macro" concept, which consists of a graphical block that represents one of the many operations needed in an automotive system, such as arithmetic, logic, filtering, integration and multiplexing functions, among others. The final result of the proposed work rests on a single, fully integrated tool that allows the development and management of the whole system in a simple, single visual interface. Part of the presented result rests on a hardware platform fully adapted to the software, as well as on high flexibility and scalability, in addition to allowing exactly the same technology to be used for the ECU, the data logger and the peripherals. Current systems follow a mostly evolutionary path, only allowing the on-line calibration of parameters but never the on-line alteration of the automotive-functionality algorithms themselves. On the contrary, the system developed and described in this thesis has the advantage of following a clean-slate approach, so everything can be globally rethought. Finally, beyond all the other characteristics, "LIVE-Prototyping" is the most relevant feature, allowing automotive algorithms (e.g. injection, ignition, lambda control, etc.) to be altered 100% on-line, keeping the engine constantly running and without ever having to stop or restart it to make such changes. This consequently eliminates any turnaround delay typically present in any current automotive system, significantly increasing the overall efficiency of the system and of its use.
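
    A loose software analogy of the "Macro" idea and of LIVE-Prototyping is sketched below: an engine-control function is a small network of primitive blocks evaluated every control cycle, and one block is replaced while the loop keeps running. The block names, signals and the toy injection formula are invented and do not reproduce the ECU2010 implementation.

        # Software analogy of "Macro" blocks and live algorithm replacement.
        # All block names, signals and formulas are invented for illustration.

        macros = {
            "rpm_filter":  lambda s: 0.9 * s["rpm_filtered"] + 0.1 * s["rpm"],
            "inj_time_ms": lambda s: 2.0 + s["rpm_filtered"] / 4000.0,
        }

        state = {"rpm": 3000.0, "rpm_filtered": 0.0}

        def control_cycle(state):
            state["rpm_filtered"] = macros["rpm_filter"](state)
            return macros["inj_time_ms"](state)

        for _ in range(3):
            print(f"injection {control_cycle(state):.2f} ms")

        # "LIVE-Prototyping": swap the injection macro between two cycles,
        # without stopping the loop that stands in for the running engine.
        macros["inj_time_ms"] = lambda s: 1.8 + s["rpm_filtered"] / 3500.0
        print(f"after live update: {control_cycle(state):.2f} ms")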