8 research outputs found

    Black Box Model based Self Healing Solution for Stuck at Faults in Digital Circuits

    Get PDF
    The paper proposes a design strategy to preserve the correct output in the event of stuck-at faults at the interconnect level of digital circuits. The approach builds a combinational architecture that can identify stuck-at faults on the intermediate lines and includes a healing mechanism to redress them. The simulated fault injection procedure introduces both single and multiple stuck-at faults at the interconnects of a two-level combinational circuit under the direction of a control signal. The inherent healing facility enables the circuit to produce the fault-free output even in the presence of faults. ModelSim-based simulation results obtained for the Circuit Under Test (CUT), implemented using a Read Only Memory (ROM), demonstrate the ability of the system to withstand the injected faults. A comparison with traditional Triple Modular Redundancy (TMR) shows the superiority of the scheme in terms of fault coverage and area overhead.
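    To make the mechanism concrete, the following Python sketch (illustrative only, not the paper's actual architecture; all names are hypothetical) models a control-signal-gated stuck-at fault on an interconnect line and the TMR majority vote used as the baseline for comparison.

```python
# Illustrative sketch: stuck-at fault injection on an interconnect line,
# gated by a control signal, plus a TMR majority vote for comparison.
# All names are hypothetical, not taken from the paper.

def interconnect(value: int, inject: bool, stuck_at: int) -> int:
    """Pass the logic value through, unless injection is enabled,
    in which case the line is forced to the stuck-at value (0 or 1)."""
    return stuck_at if inject else value

def tmr_vote(a: int, b: int, c: int) -> int:
    """Classic triple modular redundancy: majority of three replica outputs."""
    return (a & b) | (b & c) | (a & c)

def and_gate(x: int, y: int) -> int:
    return x & y

# Example: an AND gate whose input line is stuck-at-1 while the control
# signal enables injection; TMR masks the fault if only one replica is hit.
faulty = and_gate(interconnect(0, inject=True, stuck_at=1), 1)   # -> 1 (wrong)
golden = and_gate(0, 1)                                          # -> 0
voted = tmr_vote(faulty, golden, golden)                         # -> 0 (masked)
print(faulty, golden, voted)
```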

    Comparison of Fault Simulation Over Custom Kernel Module Using Various Techniques

    Get PDF
    To test the behavior of Linux kernel modules, device drivers and file systems under faulty conditions, researchers inject faults in artificial environments. Because such faults are rare and unpredictable in the field, localizing and detecting errors in the kernel, device drivers and file system modules is very difficult; artificially introducing random faults during normal tests is the only known practical approach. A standard method for performing such experiments is to generate synthetic faults and study their effects. Various fault injection frameworks have been analyzed over the Linux kernel for this purpose. The paper compares the different approaches and techniques used for such fault injection to test Linux kernel modules, including simulating low-resource conditions and detecting memory leaks. The frameworks used in these experiments are the Linux Test Project (LTP), KEDR, Linux Fault-Injection (LFI), and SCSI.
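    As a hedged illustration of the low-resource-condition testing these frameworks support, the sketch below drives the Linux kernel's built-in failslab fault-injection knobs from Python. It assumes a kernel built with CONFIG_FAULT_INJECTION_DEBUG_FS and CONFIG_FAILSLAB, debugfs mounted at /sys/kernel/debug, and root privileges; it is not taken from the paper.

```python
# Minimal sketch of driving the kernel's built-in fault-injection framework
# (failslab) from user space. Assumes CONFIG_FAULT_INJECTION_DEBUG_FS and
# CONFIG_FAILSLAB, debugfs mounted at /sys/kernel/debug, and root privileges.
# Knob names follow Documentation/fault-injection/fault-injection.rst.
from pathlib import Path

FAILSLAB = Path("/sys/kernel/debug/failslab")

def configure_failslab(probability: int, times: int = -1, interval: int = 1) -> None:
    """Make a fraction of slab allocations fail to simulate low-memory conditions."""
    (FAILSLAB / "probability").write_text(str(probability))  # % of eligible allocations
    (FAILSLAB / "interval").write_text(str(interval))        # check every Nth allocation
    (FAILSLAB / "times").write_text(str(times))              # -1 = no limit
    (FAILSLAB / "verbose").write_text("1")                   # log injected failures

if __name__ == "__main__":
    configure_failslab(probability=10)   # fail ~10% of slab allocations
    # ... exercise the module under test, then reset:
    configure_failslab(probability=0)
```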

    FIMSIM: A fault injection infrastructure for microarchitectural simulators

    Get PDF
    Fault injection is a widely used approach for experiment-based dependability evaluation in which faults can be injected into the hardware, the simulator or the software. Simulation-based fault injection is particularly appealing to researchers, since it can be applied at the early design stages of a processor. As such, it enables a preliminary analysis of the correlation between the criticality of circuit-level faults and their impact on applications. However, the lack of publicly available fault injectors for microarchitecture-level simulators places the extra burden of designing and implementing fault injectors on researchers who evaluate microarchitecture dependability. In this study, we present FIMSIM, to the best of our knowledge the first publicly available fault injection simulator at the microarchitecture level. FIMSIM is a compact tool capable of injecting transient, permanent, intermittent and multi-bit faults. FIMSIM therefore provides the opportunity to comprehensively evaluate the vulnerability of different microarchitectural structures against different fault models.
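    The fault models named above can be illustrated with a small Python sketch (hypothetical helper functions, not FIMSIM's actual API) that perturbs a stored value according to the chosen model and the current simulation cycle.

```python
# Illustrative sketch of the fault models mentioned above applied to a value
# held in a simulated structure (hypothetical helpers, not FIMSIM's API).
def flip_bits(value: int, bits: list[int]) -> int:
    """Flip the given bit positions (models single- or multi-bit upsets)."""
    for b in bits:
        value ^= (1 << b)
    return value

def apply_fault(value: int, model: str, bit: int, cycle: int,
                start: int = 0, period: int = 4) -> int:
    if model == "transient":        # single upset at one cycle only
        return flip_bits(value, [bit]) if cycle == start else value
    if model == "permanent":        # stuck-at-0 from the start cycle onwards
        return value & ~(1 << bit) if cycle >= start else value
    if model == "intermittent":     # fault re-appears periodically
        return flip_bits(value, [bit]) if cycle >= start and (cycle - start) % period == 0 else value
    if model == "multi-bit":        # two adjacent bits upset at once
        return flip_bits(value, [bit, bit + 1]) if cycle == start else value
    return value

print(apply_fault(0b1010, "permanent", bit=1, cycle=5, start=3))  # -> 0b1000
```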

    Enhancement of fault injection techniques based on the modification of VHDL code

    Full text link
    Deep-submicrometer devices are expected to be increasingly sensitive to physical faults. For this reason, fault-tolerance mechanisms are increasingly required in VLSI circuits, and validating their dependability is a primary concern in the design process. Fault injection techniques based on hardware description languages offer important advantages over other techniques. First, because they can be applied during the design phase of the system, they help reduce the time-to-market. Second, they provide high controllability and reachability. Among the different techniques, those based on saboteurs and mutants are especially attractive due to their high fault modeling capability. However, automating these techniques in a fault injection tool is difficult; the insertion of saboteurs and the generation of mutants are particularly complex. In this paper, we present new proposals for implementing saboteurs and mutants in VHDL models that are easy to automate and whose philosophy can be generalized to other hardware description languages.
    Baraza Calvo, JC.; Gracia-Morán, J.; Blanc Clavero, S.; Gil Tomás, DA.; Gil Vicente, PJ. (2008). Enhancement of fault injection techniques based on the modification of VHDL code. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 16(6):693-706. doi:10.1109/TVLSI.2008.2000254
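    The Python sketch below illustrates, in a language-neutral way, the difference between the two techniques: a saboteur is inserted on a signal path and corrupts the value only when activated, whereas a mutant replaces a component's behaviour altogether. It is a conceptual aid only; the paper's actual proposals operate on VHDL models.

```python
# Conceptual sketch of saboteurs vs. mutants, transplanted to Python for
# brevity (the paper targets VHDL). All names are illustrative.

def saboteur(signal: int, enable: bool, fault: str = "stuck_at_0") -> int:
    """A saboteur sits on a signal path; when enabled it corrupts the value."""
    if not enable:
        return signal
    if fault == "stuck_at_0":
        return 0
    if fault == "stuck_at_1":
        return 1
    if fault == "bit_flip":
        return signal ^ 1
    return signal

def and2(a: int, b: int) -> int:          # original component
    return a & b

def and2_mutant(a: int, b: int) -> int:   # mutant: behaviour replaced (acts as OR)
    return a | b

# Fault-free vs. sabotaged vs. mutated evaluation of the same netlist node:
print(and2(1, 1), and2(saboteur(1, enable=True), 1), and2_mutant(1, 1))
```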

    Simulating the effects of logic faults in implementation-level VITAL-compliant models

    Full text link
    Simulation-based fault injection is a well-known technique for assessing the dependability of hardware designs specified using hardware description languages (HDLs). Although logic faults are usually introduced in models defined at the register transfer level (RTL), the most accurate results are obtained with implementation-level models, which reflect the actual structure and timing of the circuit. These models consist of a list of interconnected technology-specific components (macrocells), provided by vendors and annotated with post-place-and-route delays. Macrocells described in the very high speed integrated circuit HDL (VHDL) should also comply with the VHDL initiative towards application-specific integrated circuit libraries (VITAL) standard to be interoperable across standard simulators. However, the rigid architecture imposed by VITAL means that fault injection procedures applied at RTL cannot be used straightforwardly. This work identifies a set of generic operations on VITAL-compliant macrocells that are then used to define how to accurately simulate the effects of common logic fault models. The generality of the proposal is supported by the definition of a platform-specific fault injection procedure based on these operations. Three embedded processors, implemented using the Xilinx toolchain and the SIMPRIM library of macrocells, are considered as a case study, which exposes the gap between robustness assessments at RTL and at the implementation level.
    This work has been partially funded by the Ministerio de Economia, Industria y Competitividad of Spain under grant agreement no. TIN2016-81075-R, and the "Programa de Ayudas de Investigacion y Desarrollo" (PAID) of Universitat Politecnica de Valencia.
    Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2019). Simulating the effects of logic faults in implementation-level VITAL-compliant models. Computing, 101(2):77-96. https://doi.org/10.1007/s00607-018-0651-4
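    One common way to realise fault injection through simulator-level operations is to generate a command (do) file that forces and later releases a net, as in the hedged Python sketch below. The force/noforce/run commands are standard ModelSim/Questa commands; the net path, timing values and file name are placeholders, and this is not the authors' actual tooling.

```python
# Sketch: generate a simulator command (do) file that forces a stuck-at-0 on
# a macrocell output net for a given time window. Paths, times and the file
# name are placeholders; this is illustrative, not the paper's procedure.
def stuck_at_script(net: str, value: str, start_ns: int, duration_ns: int,
                    total_ns: int) -> str:
    return "\n".join([
        f"run {start_ns} ns",                      # simulate up to the injection instant
        f"force -freeze {net} {value}",            # pin the net to the stuck-at value
        f"run {duration_ns} ns",                   # keep the fault active
        f"noforce {net}",                          # release the net
        f"run {total_ns - start_ns - duration_ns} ns",
        "quit -f",
    ])

with open("inject_sa0.do", "w") as f:
    f.write(stuck_at_script("/tb/dut/u_alu/q_reg/Q", "0", 1000, 200, 5000))
```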

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Full text link
    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since these devices are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Accordingly, dependability should be considered one of the primary criteria for decision making throughout the whole design flow, which should be complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured, improving as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations.
Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, as they are usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing as much as possible the robustness evaluation effort. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA.
    Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
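    As a rough illustration of the fault-injection campaigns that such dependability assessment relies on, the Python sketch below runs a workload repeatedly with one injected fault per run and classifies each outcome as masked, silent data corruption (SDC) or hang. run_workload() and its arguments are hypothetical placeholders, not part of the thesis' toolkit.

```python
# Minimal sketch of a fault-injection campaign: obtain a golden reference run,
# then repeat the workload with one injected fault per run and classify the
# outcome. run_workload() is a hypothetical callable returning (result, finished).
import random

def classify(result, golden, finished: bool) -> str:
    if not finished:
        return "hang"
    return "masked" if result == golden else "sdc"  # sdc = silent data corruption

def campaign(run_workload, n_runs: int, fault_sites: list[str]) -> dict[str, int]:
    golden, _ = run_workload(fault=None)
    stats = {"masked": 0, "sdc": 0, "hang": 0}
    for _ in range(n_runs):
        site = random.choice(fault_sites)
        when = random.randint(0, 10_000)               # injection cycle
        result, finished = run_workload(fault=(site, when))
        stats[classify(result, golden, finished)] += 1
    return stats
```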

    Designs for increasing reliability while reducing energy and increasing lifetime

    Get PDF
    In the last decades, computing technology has experienced tremendous developments. For instance, transistor feature size has consistently halved roughly every two years since Moore first stated his law, so the number of transistors and the core count per chip double with each generation. Similarly, petascale systems capable of performing more than one quadrillion calculations per second have been built, and exascale systems were predicted to become available around 2020. However, these developments face a reliability wall. Transistor feature sizes are now so small that it becomes easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or 0 to 1. Even if the fault rate per transistor stays constant with scaling, the increase in total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stress, so the lifetimes of processor structures are becoming shorter. On the other hand, due to the limited power budget of computer systems such as mobile devices, it is attractive to scale down the supply voltage; but when the voltage drops beyond the safe margin, especially to ultra-low levels, the error rate increases drastically. Furthermore, new memory technologies such as NAND flash have only a limited nominal lifetime, beyond which they cannot guarantee that data is stored correctly, leading to data retention problems. Due to these issues, reliability has become a first-class design constraint for contemporary computing, in addition to power and performance. Reliability plays an even more important role when computer systems process sensitive and life-critical information such as health records, financial data, power regulation, and transportation. In this thesis, we present several reliability designs for detecting and correcting errors that occur, for various reasons, in processor pipelines, L1 caches and non-volatile NAND flash memories. These designs serve three main purposes. Our first goal is to improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we aim to reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we aim to increase the lifetime of new memory technologies by implementing efficient and low-cost reliability schemes.
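    As a concrete example of the kind of error-correcting protection referred to above, the Python sketch below implements a textbook Hamming(7,4) code, which corrects any single bit flip in a stored word; it is illustrative only and is not the specific scheme proposed in the thesis.

```python
# Illustrative sketch (not the thesis' actual scheme): a Hamming(7,4) code,
# the classic single-error-correcting protection used for memory structures.

def hamming74_encode(d: list[int]) -> list[int]:
    """d = [d1, d2, d3, d4] -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Recompute the parity checks; a non-zero syndrome points at the flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # covers positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]            # covers positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]            # covers positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3           # 1-indexed position of the error
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]           # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                                  # a single bit flip (e.g. particle strike)
assert hamming74_correct(word) == [1, 0, 1, 1]
```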

    Automatic Generation of Distributed Runtime Infrastructure for Internet of Things

    Get PDF
    Ph.D. Thesis. The Internet of Things (IoT) represents a network of connected devices that are able to cooperate and interact with each other in order to reach a particular goal. To attain this, the devices are equipped with identification, sensing, networking and processing capabilities. Cloud computing, on the other hand, is the delivery of on-demand computing services – from applications to storage to processing power – typically over the internet. Clouds bring a number of advantages to distributed computing because of their highly available pools of virtualized computing resources. Due to the large number of connected devices, real-world IoT use cases may generate overwhelmingly large amounts of data, which prompts the use of cloud resources for processing, storing and analysing the data. A typical IoT system therefore comprises a front-end (devices that collect and transmit data) and a back-end, typically distributed Data Stream Management Systems (DSMSs) deployed on cloud infrastructure for data processing and analysis. Increasingly, new IoT devices are being manufactured to provide a limited execution environment on top of their data sensing and transmitting capabilities. This demands a change in the way data is processed in a typical IoT-cloud setup. The traditional, centralised cloud-based data processing model, where IoT devices are used only for data collection, does not provide an efficient utilisation of all available resources, and the fundamental requirements of real-time data processing, such as short response times, may not always be met. This prompts a new processing model based on decentralising the data processing tasks. The decentralised architectural pattern allows some parts of a data streaming computation to be executed directly on edge devices, closer to where the data is collected. Extending the processing capabilities to the IoT devices increases the robustness of applications and reduces the communication overhead between the different components of an IoT system. However, this new pattern poses new challenges in the development, deployment and management of IoT applications. Firstly, there is a large resource gap between the two parts of a typical IoT system (clouds and IoT devices), which prompts a new approach to IoT application deployment and management. Secondly, the decentralised approach necessitates deploying DSMSs on distributed clusters of heterogeneous nodes, resulting in unpredictable runtime performance and complex fault characteristics. Lastly, the environment where DSMSs are deployed is highly dynamic due to user or device mobility, workload variation, and changing resource availability. In this thesis we present solutions to address these challenges. We investigate how a high-level description of a data streaming computation can be used to automatically generate a distributed runtime infrastructure for the Internet of Things. We then develop a deployment and management system capable of distributing the operators of a data streaming computation across IoT gateway devices and cloud infrastructure. To address the remaining challenges, we propose a non-intrusive approach for performance evaluation of DSMSs and present a protocol and a set of algorithms for dynamic migration of stateful data stream operators.
    To improve our migration approach, we provide an optimisation technique that minimises application downtime and improves the accuracy of the data stream computation.
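    The sketch below gives a minimal, hypothetical picture of stateful operator migration in Python: the operator's state is checkpointed on the source node, restored on the target, and tuples buffered during the move are replayed. It is not the protocol proposed in the thesis; all class and function names are illustrative.

```python
# Conceptual sketch of migrating a stateful stream operator between nodes:
# pause input, checkpoint state, restore it on the target, and replay buffered
# tuples so no data is lost. All names are hypothetical.
import json

class WindowedCount:
    """A stateful operator: counts tuples per key."""
    def __init__(self, state=None):
        self.state = dict(state or {})

    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1

    def checkpoint(self) -> str:
        return json.dumps(self.state)

def migrate(source: WindowedCount, buffered: list) -> WindowedCount:
    snapshot = source.checkpoint()                 # 1. freeze and serialise state
    target = WindowedCount(json.loads(snapshot))   # 2. restore on the new node
    for key in buffered:                           # 3. replay tuples buffered during the move
        target.process(key)
    return target

op = WindowedCount()
for k in ["a", "b", "a"]:
    op.process(k)
new_op = migrate(op, buffered=["b"])
print(new_op.state)   # {'a': 2, 'b': 2}
```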