
    Comparing the impact of power supply voltage on CMOS- and FinFET-based SRAMs in the presence of resistive defects

    CMOS technology scaling has reached its limit at the 22 nm technology node due to several factors, including Process Variations (PV), increased leakage current, Random Dopant Fluctuation (RDF), and mainly the Short-Channel Effect (SCE). In order to continue miniaturization via technology down-scaling while preserving system reliability and performance, Fin Field-Effect Transistors (FinFETs) have arisen as an alternative to CMOS transistors. In parallel, Static Random-Access Memories (SRAMs) occupy an increasingly large share of Systems-on-Chip (SoC) silicon area, making their reliability an important issue. SRAMs are designed to reach densities at the limit of the manufacturing process, which makes them susceptible to manufacturing defects, including resistive ones. Such defects may cause dynamic faults during the circuit's lifetime, an important cause of test escape. Thus, identifying the precise faulty behavior under different operating conditions is crucial for developing more suitable test methodologies. In this context, the behavior of a 22 nm CMOS-based and a 20 nm FinFET-based SRAM in the presence of resistive defects is compared across different power supply voltages. In more detail, the behavior of defective cells operating under different power supply voltages has been investigated through SPICE simulations. Results show that the power supply voltage plays an important role in the faulty behavior of both CMOS- and FinFET-based SRAM cells in the presence of resistive defects, and that its effect is more pronounced in the FinFET-based memories. Across different operating temperatures, the results also show a markedly higher occurrence of dynamic faults in FinFET-based SRAMs compared to the CMOS technology.
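    A minimal sketch of the kind of corner sweep such a study performs is shown below. The pass/fail model is a toy stand-in for the actual SPICE simulations; all thresholds and names are illustrative assumptions, not data from the paper.

```python
def write_succeeds(r_defect_ohm, vdd, temp_c):
    """Toy pass/fail model: a resistive-open defect slows the internal node
    swing, and the margin shrinks at low supply voltage and high temperature."""
    critical_r = 50e3 * vdd / (1 + 0.004 * (temp_c - 25))  # assumed relationship
    return r_defect_ohm < critical_r

for vdd in (0.6, 0.8, 1.0):               # supply-voltage corners
    for temp_c in (-40, 25, 125):         # temperature corners
        for r in (10e3, 40e3, 80e3):      # injected defect resistances
            verdict = "pass" if write_succeeds(r, vdd, temp_c) else "dynamic fault"
            print(f"VDD={vdd:.1f} V, T={temp_c:>4} C, Rdef={r/1e3:>3.0f} kOhm -> {verdict}")
```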

    Random access memory testing: theory and practice: the gains of fault modelling


    Ultra-low-power SRAM design in high variability advanced CMOS

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 163-181).
    Embedded SRAMs are a critical component in modern digital systems, and their role continues to grow. As a result, SRAMs strongly impact the overall power, performance, and area, and, in order to manage these severely constrained trade-offs, they must be specially designed for target applications. Highly energy-constrained systems (e.g., implantable biomedical devices, multimedia handsets, etc.) are an important class of applications driving ultra-low-power SRAMs. This thesis analyzes the energy of an SRAM sub-array. Since supply and threshold voltage have a strong effect, targets for these are established in order to optimize energy. Despite the heavy emphasis on leakage energy, analysis of a high-density 256x256 sub-array in 45 nm LP CMOS points to two necessary optimizations: (1) aggressive supply-voltage reduction (in addition to Vt elevation), and (2) performance enhancement. Important SRAM metrics, including read/write/hold margin and read current, are also investigated to identify the trade-offs of these optimizations. Based on the need to lower the supply voltage, a 0.35 V 256 kb SRAM is demonstrated in 65 nm LP CMOS. It uses an 8T bit-cell with peripheral circuit assists to improve write margin and bit-line leakage. Additionally, redundancy, to manage the increasing impact of variability in the periphery, is proposed to improve the area-offset trade-off of sense-amplifiers, demonstrating promise for highly advanced technology nodes. Based on the need to improve performance, which is limited by density constraints, a 64 kb SRAM, using an offset-compensating sense-amplifier, is demonstrated in 45 nm LP CMOS with high-density 0.25 µm² bit-cells. The sense-amplifier is regenerative but non-strobed, overcoming the timing uncertainties that limit performance, and it is single-ended, for compatibility with 8T cells. Compared to a conventional strobed sense-amplifier, it achieves a 34% improvement in worst-case access time and a 4x improvement in the standard deviation of the access time. by Naveen Verma. Ph.D.
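    The core trade-off this analysis navigates can be illustrated with a first-order energy model: active CV² energy falls quadratically with VDD, while leakage energy grows as the cycle time stretches at low voltage, yielding a minimum-energy supply point. The sketch below uses made-up parameters, not numbers from the thesis.

```python
C_EFF = 50e-15   # switched capacitance per access [F] (assumed)
I_LEAK = 1e-6    # sub-array leakage current [A] (assumed)
T0 = 1e-9        # cycle time at the nominal VDD of 1.1 V [s] (assumed)

def cycle_time(vdd):
    """Toy delay model: cycle time stretches steeply at low voltage."""
    return T0 * (1.1 / vdd) ** 3

def energy_per_access(vdd):
    e_active = C_EFF * vdd ** 2              # CV^2 switching energy
    e_leak = I_LEAK * vdd * cycle_time(vdd)  # leakage integrated over a cycle
    return e_active + e_leak

for vdd in (0.3, 0.4, 0.5, 0.7, 0.9, 1.1):
    print(f"VDD={vdd:.1f} V -> E={energy_per_access(vdd) * 1e15:6.1f} fJ")
```

With these toy numbers the total energy bottoms out around 0.4-0.5 V, mirroring the motivation for sub-0.4 V operation with circuit assists.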

    Test and Diagnosis of Integrated Circuits

    The ever-increasing growth of the semiconductor market results in an increasing complexity of digital circuits. Smaller, faster, cheaper, and lower-power devices are the main challenges in the semiconductor industry. The reduction of transistor size and the latest packaging technologies (i.e., System-on-a-Chip, System-in-Package, Through-Silicon-Via 3D Integrated Circuits) allow the semiconductor industry to meet these challenges. Although producing such advanced circuits benefits users, the manufacturing process is becoming finer and denser, making chips more prone to defects. The work presented in the HDR manuscript addresses the challenges of test and diagnosis of integrated circuits. It covers:
    • Power-aware test;
    • Test of low-power devices;
    • Fault diagnosis of digital circuits.

    Conception et test des circuits et systèmes numériques à haute fiabilité et sécurité [Design and Test of High-Reliability, High-Security Digital Circuits and Systems]

    Research activities I carried on after my nomination as Chargé de Recherche deal with the definition of methodologies and tools for the design, test, and reliability of secure digital circuits and trustworthy manufacturing. More recently, we have started a new research activity on the test of 3D stacked Integrated Circuits based on the use of Through Silicon Vias. Moreover, thanks to the relationships I have maintained after my post-doc in Italy, I have kept on cooperating with Politecnico di Torino on topics related to the test and reliability of memories and microprocessors.

    Secure and Trusted Devices

    Security is a critical part of information and communication technologies and is the necessary basis for obtaining confidentiality, authentication, and integrity of data. The importance of security is confirmed by the extremely high growth of the smart-card market in the last 20 years. The article "Computer Crime and Security Survey", reported in "Le Monde Informatique" in 2007, estimates that financial losses due to attacks on "secure objects" in the digital world exceed $11 billion. Since the race between the developers of these secure devices and attackers keeps accelerating, also due to the heterogeneity and sheer number of new systems, improving the resistance of such components is today's major challenge.
    Among all possible security threats, the vulnerability of electronic devices that implement cryptographic functions (including smart cards and electronic passports) has become the Achilles' heel of the last decade. Indeed, even though recent crypto-algorithms have been proven resistant to cryptanalysis, certain fraudulent manipulations of the hardware implementing such algorithms can allow confidential information to be extracted. So-called Side-Channel Attacks were the first type of attack to target the physical device. They are based on information gathered from the physical implementation of a cryptosystem. For instance, by correlating the power consumed with the data manipulated by the device, it is possible to discover the secret encryption key. Nevertheless, this point is widely addressed, and integrated circuit (IC) manufacturers have already developed different kinds of countermeasures.
    More recently, new threats have menaced secure devices and the security of the manufacturing process. A first issue is the trustworthiness of the manufacturing process. On one side, secure devices must ensure very high production quality in order not to leak confidential information due to a malfunctioning device. Therefore, possible defects due to manufacturing imperfections must be detected. This requires high-quality test procedures that rely on test features which increase the controllability and observability of internal points of the circuit. Unfortunately, this is harmful from a security point of view, and therefore access to these test features must be protected from unauthorized users. Another threat is the possibility for an untrusted manufacturer to make malicious alterations to the design (for instance, to bypass or disable the security fence of the system). Nowadays, many steps of the production cycle of a circuit are outsourced. For economic reasons, the manufacturing process is often carried out by foundries located in foreign countries.
The threat brought by so-called Hardware Trojan Horses, long considered theoretical, is beginning to materialize. A second issue is the hazard of faults that can appear during the circuit's lifetime and affect its behavior, by way of soft errors or deliberate manipulations called Fault Attacks. These can be based on intentional modification of the circuit's environment (e.g., applying extreme temperature, exposing the IC to radiation, X-rays, ultra-violet or visible light, or tampering with the clock frequency) in such a way that the function implemented by the device generates an erroneous result. The attacker can discover secret information by comparing the erroneous result with the correct one. In-the-field detection of any failing behavior is therefore of prime interest for taking further action, such as discontinuing operation or triggering an alarm. In addition, today's smart cards use 90 nm technology, and according to the various chip suppliers, 65 nm technology will be in effect by 2013-2014. Since the energy required to make a transistor switch is reduced in these new technologies, next-generation secure systems will become even more sensitive to various classes of fault attacks. Based on these considerations, within the group I work with, we have proposed new methods, architectures, and tools to solve the following problems:
    • Test of secure devices: unfortunately, classical techniques for digital circuit testing cannot be easily used in this context. Classical testing solutions are based on Design-for-Testability techniques that add hardware components to the circuit, aiming to provide full controllability and observability of internal states. Because crypto-processors and the other cores in a secure system must pass through high-quality test procedures to ensure that data are correctly processed, the testing of crypto chips faces a dilemma: design-for-testability schemes want to provide high controllability and observability of the device, while security wants minimal controllability and observability in order to hide the secret. We have therefore proposed, on one side, the use of enhanced scan-based test techniques that exploit compaction schemes to reduce the observability of internal information while preserving a high level of testability; on the other side, we have proposed the use of Built-In Self-Test for such devices in order to avoid scan-chain-based testing.
    • Reliability of secure devices: we proposed an on-line self-test architecture for hardware implementations of the Advanced Encryption Standard (AES). The solution exploits the inherent spatial replication of a parallel architecture to implement functional redundancy at low cost.
    • Fault Attacks: one of the most powerful types of attack on secure devices is based on the intentional injection of faults (for instance, using a laser beam) into the system while an encryption occurs. By comparing the outputs of the circuit with and without the injected fault, it is possible to identify the secret key. To face this problem, we analyzed how to use error detection and correction codes as a countermeasure against this type of attack, and we proposed a new code-based architecture.
Moreover, we proposed a bulk built-in current sensor that detects the presence of undesired current in the substrate of the CMOS device.
    • Fault simulation: to evaluate the effectiveness of countermeasures against fault attacks, we developed an open-source fault simulator able to perform fault simulation for the most classical fault models as well as user-defined electrical-level fault models, in order to accurately model the effect of laser injections on CMOS circuits.
    • Side-Channel Attacks: these exploit physical, data-related information leaking from the device (e.g., current consumption or electro-magnetic emission). One of the most intensively studied attacks is Differential Power Analysis (DPA), which relies on observing the chip's power fluctuations during data processing (a minimal correlation-analysis sketch is given after this entry). I studied this type of attack in order to evaluate the influence of countermeasures against fault attacks on the power consumption of the device. Indeed, introducing a countermeasure for one type of attack can add circuitry whose power consumption is related to the secret key, thereby making another type of attack easier. We have developed a flexible, integrated, simulation-based environment for validating a digital circuit against this attack; all the architectures we designed have been validated with this tool. Moreover, we developed a methodology that drastically reduces the time required to validate countermeasures against this type of attack.

    TSV-based 3D Stacked Integrated Circuits Test

    The stacking of integrated circuits using TSVs (Through Silicon Vias) is a promising technology that sustains integration beyond Moore's law, since TSVs make it possible to tightly integrate multiple dies in a 3D fashion. Nevertheless, 3D integrated circuits present many test challenges, including testing at the different stages of the 3D fabrication process: pre-, mid-, and post-bond. Pre-bond test targets the individual dies at wafer level, covering not only classical logic (digital logic, IOs, RAM, etc.) but also unbonded TSVs. Mid-bond test targets partially assembled 3D stacks, while post-bond test targets the final circuit. The activities carried out within this topic cover two main issues:
    • Pre-bond test of TSVs: the electrical model of a TSV buried within the substrate of a CMOS circuit is a capacitance connected to ground (when the substrate is connected to ground). The main assumption is that a defect may affect the value of that capacitance. By measuring the variation of the capacitance's value, it is possible to check whether the TSV was correctly fabricated. We proposed a method to measure the capacitance based on the charge/discharge delay of the RC network containing the TSV: since the time for the TSV node to charge to a threshold Vth is t = RC·ln(VDD/(VDD - Vth)), a defect-induced change in C shifts the measured delay proportionally.
    • Test infrastructures for 3D stacked Integrated Circuits: testing a die before stacking it onto another die introduces the problem of a dynamic test infrastructure, where test data must be routed to a specific die depending on the fabrication step reached. Solutions proposed in the literature allow the test paths within the circuit to be reconfigured on the fly.
We have started working on an extension of the IEEE P1687 test standard that uses automatic die detection based on pull-up resistors.

    Memory and Microprocessor Test and Reliability

    Thanks to device shrinking and the miniaturization of fabrication technology, the performance of microprocessors and memories has grown by more than five orders of magnitude over the last 30 years. With this technology trend, new problems and challenges must be faced, such as reliability, transient errors, variability, and aging. In the last five years, I have worked in cooperation with the Testgroup of Politecnico di Torino (Italy) on a new method to validate on-line the correctness of a microprocessor's program execution. The main idea is to monitor a small set of control signals of the processor in order to identify incorrect activation sequences. This approach can detect both permanent and transient errors in the internal logic of the processor. Concerning memory test, we proposed a new approach to automatically generate test programs starting from a functional description of the possible faults in the memory. Moreover, we proposed a new methodology, based on microprocessor error-probability profiling, that estimates fault-injection results without the need for a typical fault-injection setup. The methodology rests on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability that each of its instructions executes successfully in the presence of a soft error, and a static, very fast analysis of the control and data flow of the target software application to compute its probability of success.
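    As a rough illustration of the profiling idea, the sketch below multiplies per-instruction success probabilities over a program's instruction mix; the opcode classes and all numbers are hypothetical, not measurements from the work.

```python
# One-time fault-injection characterization per instruction class
# (hypothetical probabilities of surviving a soft error).
P_SUCCESS = {
    "add": 0.999,
    "load": 0.995,
    "store": 0.996,
    "branch": 0.990,
}

def program_success_probability(instruction_counts):
    """Estimate the application's probability of success as the product
    of per-instruction probabilities over the executed instruction mix."""
    p = 1.0
    for opcode, count in instruction_counts.items():
        p *= P_SUCCESS[opcode] ** count
    return p

# Instruction mix obtained from a static control/data-flow analysis
# of the target application (illustrative counts).
mix = {"add": 120, "load": 80, "store": 40, "branch": 30}
print(f"estimated P(success) = {program_success_probability(mix):.4f}")
```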
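    The correlation-analysis sketch referenced in the side-channel item above is a toy correlation power analysis (CPA) on one key byte. It models leakage as the Hamming weight of plaintext XOR key (a real attack would model the AES S-box output) and fabricates synthetic traces, so every name and number here is an illustrative assumption.

```python
import numpy as np

def hw(x):
    """Hamming weight of a byte."""
    return bin(x).count("1")

def cpa_key_byte(traces, plaintexts):
    """Return the key-byte guess whose predicted leakage best correlates
    with the measured traces, over all guesses and time samples."""
    def score(guess):
        model = [hw(p ^ guess) for p in plaintexts]
        return max(np.corrcoef(model, traces[:, t])[0, 1]
                   for t in range(traces.shape[1]))
    return max(range(256), key=score)

# Synthetic demo: fabricate noisy "power traces" that leak hw(p ^ secret).
rng = np.random.default_rng(1)
plaintexts = rng.integers(0, 256, size=300).tolist()
secret = 0x3C
leak = np.array([hw(p ^ secret) for p in plaintexts], dtype=float)
traces = leak[:, None] + 0.5 * rng.normal(size=(300, 4))
print(hex(cpa_key_byte(traces, plaintexts)))  # recovers 0x3c
```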

    Sensor de performance para células de memória CMOS [A performance sensor for CMOS memory cells]

    Nowadays almost everything contains a small electronic component, and that component needs a memory to store its instructions. Among the various types of memory, CMOS (Complementary Metal Oxide Semiconductor) memories are the most widely used in integrated circuits and, as technology scales further down, performance and reliability problems become a constant concern. Effects such as BTI (Bias Temperature Instability), TDDB (Time-Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), and EM (Electromigration) progressively degrade the physical parameters of the field-effect transistors (MOSFETs), changing their electrical properties over time. BTI comprises PBTI (Positive BTI), which mainly affects NMOS transistors, and NBTI (Negative BTI), which mainly affects PMOS transistors. While NBTI dominates in nanometer technologies down to 32 nm, at smaller nodes the two effects are equally important. Other variations can also jeopardize correct circuit operation, namely process (P), voltage (V), and temperature (T) variations; considering all of these variations together with aging gives, generically, PVTA (Process, Voltage, Temperature, and Aging). Random-Access Memories (RAM), in particular static (SRAM) and dynamic (DRAM) memories, rely on precise read and write timings, which leaves them highly exposed to the aging of their components: performance degrades, transitions become slower, reads and writes slow down in turn, and errors may occur in those reads and writes. In addition, the Static Noise Margin (SNM) decreases, putting the memory's reliability at risk. The aging of CMOS memories therefore translates into errors occurring over time, which is undesirable, especially in critical systems, where a memory error or failure can endanger essential infrastructure (in a safety system, for example, an error may trigger a set of unwanted actions). Some solutions for monitoring memory errors have been presented in the literature, such as the OCAS (On-Chip Aging Sensor), an aging sensor embedded in the circuit that detects NBTI-induced aging in an SRAM. This sensor has some limitations, however: it applies only to a set of SRAM cells connected to one bit line, cannot be applied individually to other memory cells such as a DRAM, and does not consider the PBTI effect. Another previously presented solution, the Aging Sensor for CMOS Memory Cells, shows some evolution over OCAS but still has limitations, such as a strong dependence on synchronization with the memory and the lack of any means of calibrating the system during operation. The work presented in this dissertation solves many of the problems of these earlier works.
    That is, it presents a performance sensor for memories capable of recognizing when the memory may be on the verge of failing, due to factors that affect its performance in write and read operations; in other words, it signals failures predictively. The sensor is divided into three main parts, described below. The Transition Detector works as a converter between the transitions on the memory bit line and the sensor, creating pulses whose duration is proportional to the duration of the bit-line transition: a fast transition produces a short pulse, and a slow transition produces a long pulse. This part of the circuit has two configurations for use with SRAMs, one for SRAM memories initialized to VDD and a second for SRAM memories initialized to VDD/2; a third configuration is presented for the case where the detector is applied to a DRAM. The transition detector is based on a set of unbalanced inverters (that is, inverters with different drive strengths between the N and P transistors), creating N-type inverters (with the N transistor more conductive than the P) and P-type inverters (with the P transistor more conductive than the N) that respond differently to 1-to-0 and 0-to-1 transitions. These differences are crucial for creating the final pulse that enters the Pulse Detector. This second block of the sensor charges a capacitor to a voltage proportional to the time the bit line took to transition. Here the sensor presents a new and important characteristic compared with existing solutions: it can be calibrated. To this end, a set of transistors is used to charge the capacitor during the pulse generated by the transition detector; they allow the capacitor's charging resistance to be increased or decreased, storing more or less voltage (the voltage proportional to the bit-line transition time) to be used in the subsequent comparison. The third main block of the sensor is, in short, a comparator, which compares the voltage stored in the capacitor with a reference voltage available in the sensor and defined at design time. The comparator identifies which of these two voltages is higher (the capacitor voltage, proportional to the bit-line transition time, or the reference voltage) and drives it to VDD, while the lower voltage is driven to VSS. In this way, the sensor signals whether the transition under evaluation should be considered an error. To control the whole process, the sensor is built around a controller, a finite-state machine with three states. The first state is Reset, which sets every node of the circuit to the voltages required for operation to begin. The second state is Sample, which waits for a transition on the bit line to be validated by the sensor; this moves the machine to the third state, Compare, which activates the sensor's comparator and drives the result of the comparison to the output. Thus, if a transition on the bit line is too slow, which is an error signature, it is signaled externally by activating the output; if the sensor detects no error in the transitions, the output is not activated.
    The sensor can operate on-line, i.e., the memory circuit does not need to be taken out of normal operation in order to be tested. Furthermore, it can be used inside the memory as a local sensor (monitoring the real memory cells) or externally as a global sensor, monitoring a dummy memory cell.

    Among the several types of memories, Complementary Metal Oxide Semiconductor (CMOS) memories are the most used in integrated circuits and, as technology advances and becomes increasingly smaller in scale, performance and reliability become a constant problem. Effects such as BTI (Bias Temperature Instability), both positive (PBTI) and negative (NBTI), TDDB (Time-Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), EM (Electromigration), etc., are aging effects that contribute to the cumulative degradation of the transistors. Moreover, other parametric variations may also jeopardize the proper functioning of circuits and reduce their performance, such as process variations (P), power-supply voltage variations (V), and temperature variations (T); all of these variations together with aging are referred to, generically, as PVTA (Process, Voltage, Temperature, and Aging). The sensor proposed in this work aims to signal these problems so that the user knows when memory operation may be compromised. The sensor is made up of three important parts, the Transition Detector, the Pulse Detector, and the Comparator, forming a sensor that converts the bit-line transition created by a memory operation (read or write) into a pulse and then into a voltage, which can be compared with a reference voltage available in the sensor. If the reference voltage is higher than the voltage proportional to the bit-line transition time, the sensor output is not activated; but if the bit-line transition time is long enough to generate a voltage higher than the reference, the sensor output signals a predictive error, denoting that memory performance is in a critical state that may lead to an error if corrective measures are not taken. One important feature of this sensor topology is that it can be calibrated during operation, by controlling the sensor's sensitivity to the bit-line transition. Another important feature is that it can be applied locally, to monitor the online operation of the memory, or globally, by monitoring a dummy memory in pre-defined conditions. Moreover, it can be applied to SRAM or DRAM, making it the first online sensor available for DRAM memories.
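    A behavioral sketch of the three-block sensor and its three-state controller is given below; the voltage mapping, thresholds, and names are illustrative assumptions, not the circuit's actual electrical behavior.

```python
from enum import Enum, auto

class State(Enum):
    RESET = auto()
    SAMPLE = auto()
    COMPARE = auto()

class PerformanceSensor:
    """Behavioral model: the transition detector yields a pulse whose width
    tracks the bit-line transition time, the pulse detector converts it into
    a capacitor voltage, and the comparator checks it against a reference."""

    def __init__(self, v_ref=0.5, gain=1.0):
        self.v_ref = v_ref  # reference voltage, fixed at design time
        self.gain = gain    # calibration knob: strength of the charging path
        self.state = State.RESET

    def evaluate(self, transition_time_ns):
        self.state = State.SAMPLE                      # wait for a transition
        v_cap = self.gain * transition_time_ns / 10.0  # toy pulse-to-voltage map
        self.state = State.COMPARE                     # fire the comparator
        predictive_error = v_cap > self.v_ref
        self.state = State.RESET                       # re-arm for the next cycle
        return predictive_error

sensor = PerformanceSensor(v_ref=0.5, gain=1.0)
print(sensor.evaluate(3.0))  # fast transition -> False (no error flagged)
print(sensor.evaluate(8.0))  # slow transition -> True (predictive error)
```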

    Radiation Tolerant Electronics, Volume II

    Research on radiation tolerant electronics has increased rapidly over the last few years, resulting in many interesting approaches to modelling radiation effects and designing radiation hardened integrated circuits and embedded systems. This research is strongly driven by the growing need for radiation hardened electronics for space applications, high-energy physics experiments such as those on the Large Hadron Collider at CERN, and many terrestrial nuclear applications, including nuclear energy and safety management. With the progressive scaling of integrated circuit technologies and the growing complexity of electronic systems, their susceptibility to ionizing radiation has raised many exciting challenges, which are expected to drive research in the coming decade. After the success of the first Special Issue on Radiation Tolerant Electronics, the current Special Issue features thirteen articles highlighting recent breakthroughs in radiation-tolerant integrated circuit design, fault tolerance in FPGAs, radiation effects in semiconductor materials and advanced IC technologies, and modelling of radiation effects.

    Design and Implementation of Low Power SRAM Using Highly Effective Level Shifters

    The explosive growth of battery-operated devices has made low-power design a priority in recent years. In high-performance Systems-on-Chip, leakage power consumption has become comparable to the dynamic component, and its relevance increases as technology scales. These trends are even more evident for SRAM devices, since they are a dominant source of standby power consumption in low-power application processors. On-die SRAM power consumption is particularly important for increasingly pervasive mobile and handheld applications, where battery life is a key design and technology attribute. In SRAM-memory design, SRAM cells also make up the most significant portion of the total chip area. Moreover, the increasing number of transistors in SRAM memories and the growing leakage current of MOS devices in scaled technologies have turned the SRAM unit into a power-hungry block from both the dynamic and static viewpoints. Although scaling the supply voltage enables low power consumption, the data stability of the SRAM cells becomes a major concern. Thus, reducing SRAM leakage power has become a critical research concern. To address leakage power consumption in high-performance cache memories, researchers have proposed a stream of novel integrated-circuit and architecture-level techniques, including leakage-current management, cell-array leakage reduction, bitline leakage reduction, and leakage-current compensation. The main goal of this work was to improve cell-array leakage reduction techniques in order to minimize leakage power in SRAM designs for low-power applications. This study also applies body biasing to reduce leakage current: a negative DC voltage applied to the NMOS body terminal, which acts as a second gate, raises the threshold voltage and consequently lowers the leakage current. To generate this negative DC voltage, the study proposes a negative voltage reference that includes a trimming circuit and a negative level shifter. These enhancements are applied to a 10 kb SRAM memory operating at 0.3 V in a 65 nm CMOS process.
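    The leverage body biasing provides can be seen from a first-order model: a reverse body bias raises the threshold voltage through the body effect, and subthreshold leakage falls exponentially with that shift. The parameters below are generic textbook values, not figures from this work.

```python
import math

VT0 = 0.45     # zero-bias threshold voltage [V] (assumed)
GAMMA = 0.20   # body-effect coefficient [sqrt(V)] (assumed)
PHI2F = 0.85   # surface-potential term 2*phi_F [V] (assumed)
N = 1.4        # subthreshold slope factor (assumed)
VT_TH = 0.026  # thermal voltage kT/q near 300 K [V]

def vth(vsb):
    """Threshold voltage under reverse source-body bias (body effect)."""
    return VT0 + GAMMA * (math.sqrt(PHI2F + vsb) - math.sqrt(PHI2F))

def leakage_ratio(vsb):
    """Subthreshold leakage relative to zero body bias: I is proportional
    to exp(-Vth/(N*vT)), so only the Vth shift matters for the ratio."""
    return math.exp(-(vth(vsb) - vth(0.0)) / (N * VT_TH))

# A negative body voltage on an NMOS corresponds to a positive Vsb.
for vsb in (0.0, 0.1, 0.2, 0.3):
    print(f"Vsb={vsb:.1f} V -> Vth={vth(vsb):.3f} V, leakage x{leakage_ratio(vsb):.2f}")
```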

    Low power architectures for streaming applications


    Solid State Circuits Technologies

    The evolution of solid-state circuit technology has a long history within a relatively short period of time. This technology has led to the modern information society, connecting people and tools, creating a large market, and enabling many types of products and applications. Solid-state circuit technology continuously evolves via breakthroughs and improvements every year. This book is devoted to reviewing and presenting novel approaches to some of the main issues involved in this exciting and vigorous technology. The book is composed of 22 chapters, written by authors from 30 different institutions located in 12 different countries throughout the Americas, Asia, and Europe, reflecting the wide international contribution to the book. The broad range of subjects presented offers a general overview of the main issues in modern solid-state circuit technology, while also providing an in-depth analysis of specific subjects for specialists. We believe the book is of great scientific and educational value for many readers. I am profoundly indebted to all of those involved in this work. First and foremost, I would like to acknowledge and thank the authors, who worked hard and generously agreed to share their results and knowledge. Second, I would like to express my gratitude to the InTech team, who invited me to edit the book and gave me their full support and a fruitful experience while we worked together to complete this book.