
    Addressing Manufacturing Challenges in NoC-based ULSI Designs

    Hernández Luz, C. (2012). Addressing Manufacturing Challenges in NoC-based ULSI Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1669

    Error Detection and Diagnosis for System-on-Chip in Space Applications

    Thesis by compendium of publications. Commercial electronic components, also known as Commercial-Off-The-Shelf (COTS) components, are present in a wide variety of devices commonly used in our daily lives.
Particularly, the use of microprocessors and highly integrated System-on-Chip (SoC) devices has fostered the advent of increasingly intelligent electronic devices which sustain the lifestyle and progress of modern society. Microprocessors are present even in safety-critical systems, such as vehicles, planes, weapons, medical devices, implants, or power plants. In any of these cases, a fault could have severe human or economic consequences. However, every electronic system deals continuously with internal and external factors that can cause faults in its operation. The capacity of a system to operate correctly in the presence of faults is known as fault tolerance, and it is a requirement in the design and operation of critical systems. Space vehicles such as satellites or spacecraft also incorporate microprocessors to operate autonomously or semi-autonomously during their service life, with the additional difficulty that they cannot be repaired once in orbit, so they are considered critical systems. In addition, the harsh conditions in space, and specifically radiation effects, pose a major challenge to the correct operation of electronic devices. In particular, radiation-induced soft errors have the potential to become one of the major threats to the reliability of systems in space. Large space missions, typically publicly funded as in the case of NASA or the European Space Agency (ESA), have historically followed the requirement of avoiding risk at any expense, regardless of cost or schedule restrictions. Because of that, the selection of radiation-resistant (rad-hard) components specifically designed for use in space has been the dominant methodology in the paradigm of the traditional space industry, also known as "Old Space". However, rad-hard components commonly have a much higher cost and much lower performance than equivalent COTS devices. In fact, COTS components have already been used successfully by NASA and ESA in missions whose performance requirements could not be satisfied by any available rad-hard component. In recent years, access to space has become easier, in part due to the entry of private companies into the space industry. Such companies do not always seek to avoid risk at any cost, but must pursue profitability, so they trade off risk, cost, and schedule through risk management in a paradigm known as "New Space". Private companies are often interested in delivering space-based services with the maximum possible performance and benefit. For that objective, rad-hard components are less attractive than COTS due to their higher cost and lower performance. However, COTS components have not been specifically designed for use in space and typically do not include specific techniques to avoid or mitigate radiation effects in their operation. COTS components are commercialized "as is", so it is usually not possible to modify them to improve their resistance to radiation effects. Moreover, the high levels of integration of complex, high-performance SoC devices hinder their observability and the application of fault-tolerance techniques. This problem is especially relevant in the case of microprocessors. Thus, there is growing interest in the development of techniques that allow understanding and improving the behavior of COTS microprocessors under radiation without modifying their architecture and without interfering with their operation. Such techniques may facilitate the use of COTS components in space and maximize the performance of present and future space missions.
In this Thesis, novel techniques have been developed to detect, diagnose, and mitigate radiation-induced errors in COTS microprocessors and SoCs using the trace interface as an observation point. The trace interface is a resource commonly found in modern microprocessors, mainly intended to support software development and debugging activities during the design phase. However, it is commonly left unused during the operational phase of the system, so it can be reused at no cost. The trace interface constitutes a feasible connection point to observe microprocessor behavior in a non-intrusive manner and without disturbing processor operation. As a result of this Thesis, an IP module has been developed that is capable of gathering and decoding the trace information of a modern, high-end COTS microprocessor. The IP is highly configurable and customizable to support different applications and processor types. It has been designed and validated using the Xilinx Zynq-7000 device as a development platform, a COTS device of interest to the space industry. This device features a dual-core ARM Cortex-A9 processor, which is representative of modern, high-end hard-core microprocessors. The resulting IP is compatible with the ARM CoreSight technology, which enables access to trace information in ARM microprocessors. The IP is able to detect errors in the execution flow of the microprocessor and in the application data using trace information, in real time and with very low latency. The IP has been validated in fault injection campaigns and also under proton and neutron irradiation in specialized facilities. It has also been combined with other fault-tolerance techniques to build hybrid error mitigation approaches. Experimental results demonstrate its high detection capability and its potential for the diagnosis of radiation-induced errors. The result of this Thesis, developed in the framework of an Industrial Ph.D. between the University Carlos III of Madrid (UC3M) and the company Arquimea, has been successfully transferred to the company as a project sponsored by the European Space Agency to continue its development and subsequent commercialization.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: María Luisa López Vallejo; Secretary: Enrique San Millán Heredia; Member: Luigi Di Lill
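As an illustration of the detection principle described above, the sketch below checks decoded branch records against a precomputed control-flow graph and flags illegal transitions. It is only a minimal software analogue under assumed inputs: the thesis implements this as a hardware IP decoding ARM CoreSight trace packets, and the names used here (BranchRecord, expected_cfg, check_trace) are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BranchRecord:
        source: int  # address of the branch instruction observed in the trace
        target: int  # address actually taken

    # Hypothetical precomputed control-flow graph of the monitored application:
    # for each branch source address, the set of legal target addresses.
    expected_cfg = {
        0x1000: {0x1010, 0x1040},  # conditional branch: taken / not-taken paths
        0x1040: {0x1000},          # loop back-edge
    }

    def check_trace(records):
        """Flag any branch whose (source, target) pair is not allowed by the CFG."""
        errors = []
        for rec in records:
            legal = expected_cfg.get(rec.source)
            if legal is None or rec.target not in legal:
                errors.append(rec)  # control-flow error: candidate soft error
        return errors

    # The second record jumps to an address the CFG does not allow.
    trace = [BranchRecord(0x1000, 0x1010), BranchRecord(0x1040, 0x2000)]
    print(check_trace(trace))  # -> list containing only the faulty second record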

    Reliable chip design from low powered unreliable components

    The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size while power density increases. Continuous transistor scaling and ever tighter power constraints in modern Very Large Scale Integration (VLSI) chips can potentially negate the benefits of technology shrinking due to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, now also degrade the reliability of logic circuits. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: (a) reliability estimation; (b) reliability optimization; (c) fault-tolerant techniques; and (d) delay degradation analysis. To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm employing local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by a reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 microcontroller functional units indicate that the proposed framework can achieve up to 75% reduction of output error probability. On average, about 14% SER reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption. The next contribution of the research is a novel methodology to design fault-tolerant circuitry by employing error correction codes through a Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze circuit reliability from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of the CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer-Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios. On average, a roughly 1000-fold reduction in Soft Error Rate (SER) is achieved.
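To make the idea of gate-level reliability estimation concrete, the sketch below runs a naive Monte Carlo fault-injection estimate of the output error probability of a two-gate circuit under an assumed per-gate failure probability. It illustrates only the general notion of propagating gate failures to the output; it is not the CPEP algorithm itself, and the circuit, the P_FAIL value, and all names are invented for illustration.

    import random

    P_FAIL = 0.01  # assumed per-gate soft-error probability (illustrative)

    def noisy(value, faulty):
        """Flip a 1-bit gate output when a fault is injected."""
        return value ^ 1 if faulty else value

    def circuit(a, b, c, inject=False):
        """y = (a AND b) OR c, with optional random gate faults."""
        g1 = a & b
        if inject:
            g1 = noisy(g1, random.random() < P_FAIL)
        y = g1 | c
        if inject:
            y = noisy(y, random.random() < P_FAIL)
        return y

    def output_error_probability(trials=100_000):
        """Fraction of random input vectors whose faulty output differs from the golden output."""
        errors = 0
        for _ in range(trials):
            a, b, c = (random.randint(0, 1) for _ in range(3))
            if circuit(a, b, c) != circuit(a, b, c, inject=True):
                errors += 1
        return errors / trials

    print(f"estimated output error probability: {output_error_probability():.4f}")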
The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements in sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and its delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with average errors of less than 1.9% and 1.2% for the 8-bit Ripple Carry Adder (RCA) and the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
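The delay model can be illustrated with a short sketch that evaluates the Inverse Gaussian (Wald) probability density and fits its two parameters to delay samples by the method of moments. This is only an assumed, simplified stand-in: the samples below are synthetic, whereas the thesis would extract the parameters from HSPICE Monte Carlo characterization of the library cells.

    import math
    import random

    def invgauss_pdf(x, mu, lam):
        """Probability density of the Inverse Gaussian distribution IG(mu, lam), x > 0."""
        return (math.sqrt(lam / (2.0 * math.pi * x ** 3))
                * math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x)))

    def fit_invgauss(samples):
        """Method-of-moments fit: mean = mu, variance = mu**3 / lam."""
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / n
        return mean, mean ** 3 / var

    # Synthetic "gate delay" samples (arbitrary units); a real flow would use
    # HSPICE Monte Carlo data for each library cell instead.
    random.seed(0)
    delays = [random.lognormvariate(0.0, 0.25) for _ in range(5000)]

    mu, lam = fit_invgauss(delays)
    print(f"fitted IG parameters: mu = {mu:.3f}, lam = {lam:.3f}")
    print(f"IGD density at the mean delay: {invgauss_pdf(mu, mu, lam):.3f}")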

    Cross layer reliability estimation for digital systems

    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes approaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive redesign cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time. However, it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost. One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency, or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should be paid to tailoring reliability-improvement techniques to a system's requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware, and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault-tolerance mechanisms are carefully implemented at different layers, starting from the technology up to the software layer, to optimize the system by exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost, and reliability. This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors, and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
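A back-of-the-envelope sketch of the cross-layer idea: a raw technology-level fault rate is successively derated by the masking probability contributed by each higher layer, yielding the system-level rate that a cross-layer estimation flow would try to predict. The masking factors and FIT numbers below are invented for illustration, not taken from the thesis.

    raw_fit = 1000.0  # assumed raw technology-level soft-error rate (FIT = failures per 1e9 hours)

    # Assumed fraction of raw faults masked at each higher layer.
    masking = {
        "circuit layer (electrical/logical masking)": 0.60,
        "architecture layer (microarchitectural masking)": 0.70,
        "software layer (application-level masking)": 0.50,
    }

    effective_fit = raw_fit
    for layer, m in masking.items():
        effective_fit *= (1.0 - m)
        print(f"after {layer}: {effective_fit:.1f} FIT")

    # With these invented factors, 1000 * 0.4 * 0.3 * 0.5 = 60 FIT of the raw
    # rate remains visible as system-level failures.
    print(f"system-level effective rate: {effective_fit:.1f} FIT")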

    Cross-layer Soft Error Analysis and Mitigation at Nanoscale Technologies

    This thesis addresses the challenge of soft error modeling and mitigation in nanoscale technology nodes and pushes the state of the art forward by proposing novel modeling, analysis, and mitigation techniques. The proposed soft error sensitivity analysis platform accurately models both error generation and propagation, starting from technology-dependent device-level simulations all the way up to workload-dependent application-level analysis.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's points of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, the focus of this book is to deal with the different reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, processor variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts and means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Strain-Engineered MOSFETs

    This book brings together new developments in the area of strain-engineered MOSFETs using high-mobility substrates such as SiGe, strained Si, germanium-on-insulator, and III-V semiconductors into a single text that covers the materials aspects, principles, and design of advanced devices, their fabrication, and their applications. The book presents a full TCAD methodology for strain engineering in Si CMOS technology, involving data flow from process simulation to systematic process variability simulation and the generation of SPICE process compact models for manufacturing and yield optimization.

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from a part of the huge field of evolutionary algorithms (EAs).

    Certification of many-body bosonic interference in 3D photonic chips

    Quantum information and quantum optics have reached several milestones during the last two decades. Starting from the 1980s, when Feynman and others laid the foundations of quantum computation and information, recent years have seen significant progress in both theoretical and experimental aspects. A series of quantum algorithms has been proposed that promise a computational speed-up with respect to their classical counterparts. If fully exploited, quantum computers are expected to markedly outperform classical ones in several specific tasks. More generally, quantum computers would change the paradigm of what we currently consider efficiently computable, being based on a completely different way to encode and process data, which relies on the unique properties of quantum mechanics such as linear superposition and entanglement. The building block of quantum computation is the qubit, which incorporates in its definition the revolutionary aspects that would enable overcoming classical computation in terms of efficiency and security. However, recent technological developments have claimed the realization of devices with hundreds of controllable qubits, provoking an important debate about what exactly constitutes a quantum computing process and how to unambiguously recognize the presence of a quantum speed-up. Nevertheless, the question of what exactly makes a quantum computer faster than a classical one currently has no clear answer. Its applications could range from cryptography, with a significant enhancement in terms of security, to communication and the simulation of quantum systems. In particular, in the latter case it was shown by Feynman that some problems in quantum mechanics are intractable by means of classical approaches alone, due to the exponential increase in the dimension of the Hilbert space. Clearly, the question of where quantum capabilities in computation are significant is still open, and the difficulty of answering these problems has led the scientific community to focus its efforts on developing these kinds of systems. As a consequence, significant progress has been made in trapped ions, superconducting circuits, neutral atoms, and linear optics, permitting the first implementations of such devices.
Among all the schemes introduced, the linear-optics approach uses photons to encode information and is believed to be promising for many tasks. For instance, photons are important for quantum communication and cryptography protocols because of their natural tendency to behave as "flying" qubits. Moreover, with identical properties (energy, polarization, spatial and temporal profiles), indistinguishable photons can interfere with each other due to their bosonic nature. These features have a direct application in the task of performing quantum protocols. In fact, they are suitable for several recent schemes such as graph- and cluster-state photonic quantum computation. In particular, it has been proved that universal quantum computation is possible using only simple optical elements, single-photon sources, number-resolving photodetectors, and adaptive measurements, thus confirming the pivotal importance of these particles. Although the importance of linear optics has been confirmed in recent decades, its potential was already anticipated years earlier, when (1) Burnham et al. discovered Spontaneous Parametric Down-Conversion (SPDC), (2) Hong, Ou, and Mandel discovered the namesake (HOM) effect, and (3) Reck et al. showed how a particular combination of simple optical elements can reproduce any unitary transformation. (1) SPDC consists of the generation of entangled photon pairs in a nonlinear crystal pumped by a strong laser; despite recent advances in other approaches, it has been the keystone of single-photon generation for several years, due to the possibility of creating entangled photon pairs with high spectral correlation. (2) The HOM effect demonstrated the tendency of indistinguishable photon pairs to "bunch" in the same output port of a balanced beam splitter, de facto showing a signature of quantum interference. Finally, (3) the capability to realize any unitary operation in the space of the occupation modes led to the identification of interferometers as pivotal objects for quantum information protocols with linear optics.
Once the importance of all these ingredients had been recognized, linear optics aimed at large implementations to perform protocols with a concrete quantum advantage. Unfortunately, the methods exploited by bulk optics suffer from strong mechanical instabilities, which prevent a transition to large-scale experiments. The need for both stability and scalability has led to the miniaturization of such bulk optical devices. Several techniques have been employed to reach this goal, such as lithographic processes and implementations on silica materials. All these approaches are significant in terms of stability and ease of manipulation, but they are still expensive in terms of cost and fabrication time and, moreover, do not allow exploiting the third dimension to realize more complex platforms. A powerful approach to transferring linear optical elements onto an integrated photonic platform able to overcome these limitations has been recognized in femtosecond laser micromachining (FLM). FLM, developed over the last two decades, exploits nonlinear absorption of focused femtosecond pulses in a medium to write arbitrary 3D structures inside an optical substrate. Miniaturized beam splitters and phase shifters are then realized by inducing a localized change in the refractive index of the medium. This technique allows writing complex 3D circuits by moving the sample along the desired path at constant velocity, perpendicular to the laser beam. 3D structures can also be made either polarization sensitive or insensitive, thanks to the low birefringence of the material used (borosilicate glass), enabling polarization-encoded qubits and polarization-entangled photons for quantum computation protocols. As a consequence, integrated photonics gives us a starting point to implement quantum simulation processes in a very stable configuration. This feature could pave the way to investigating larger experiments, where higher numbers of photons and optical elements are involved. Recently, it has been suggested that many-particle bosonic interference can be used as a testing tool for the computational power of quantum computers and quantum simulators. Despite the important constraints that we need to satisfy to build a universal quantum computer and perform quantum computation in linear optics, bosonic statistics finds a new, simpler, and promising application in pinpointing the ingredients of a quantum advantage. In this context, an interesting model was recently introduced: the Boson Sampling problem.
This model exploits the evolution of indistinguishable bosons through an optical interferometer described by a unitary transformation, and it consists of sampling from the output distribution. The core of this model is many-body boson interference: although measuring the outcomes is easy to perform, simulating the output of the device is believed to be intrinsically hard classically in terms of physical resources and time, even approximately. For this reason, Boson Sampling captured the interest of the optical community, which has concentrated its efforts on realizing this kind of platform experimentally. The phenomenon can be interpreted as a generalization of the Hong-Ou-Mandel effect to an n-photon state that interferes in an m-mode interferometer. In principle, if we are able to reach large dimensions (in n and m), this method can provide the first evidence of a quantum advantage over classical computation and, moreover, it could open the way to implementations of quantum computation based on quantum interference. Although the path seems promising, this approach has non-trivial drawbacks. First, (a) we need to reach large-scale implementations in order to observe a quantum advantage, so how can we scale them up? There are two roads we can follow: (a1) scale up the number of modes with the techniques developed in integrated photonics, trying to find the best implementation for our interferometers in terms of robustness against losses, or (a2) scale up the number of photons, identifying appropriate sources for this task. Second, (b) in order to perform quantum protocols we need to "trust" that genuine photon interference, the protagonist of the phenomenon, is actually occurring. For large-scale implementations, simulating the physical behaviour by means of classical approaches quickly becomes intractable. In this case the road we chose is (1) to identify the transformations that are optimal for discriminating true photon interference and (2) to use classification protocols, such as machine learning techniques and statistical tools, to extract information and correlations from the output data. Following these premises, the main goal of this thesis is to address these problems by following the suggested paths. Firstly, we give an overview of the theoretical and experimental tools used and, secondly, we present the subsequent analyses that we have carried out.
Regarding point (a1), we performed several analyses under broad and realistic conditions. We studied quantitatively the differences between the three known architectures to identify which scheme is most appropriate for the realization of unitary transformations in our interferometers, in terms of scalability and robustness to losses and noise. We also studied the problem by comparing our results with recent developments in integrated photonics. Regarding point (a2), we studied different experimental realizations that seem promising for scaling up both the number of photons and the performance of the quantum device. First, we used multiple SPDC sources to improve the generation rate of single photons. Second, we analyzed the performance of on-demand single-photon sources using a 3-mode integrated photonic circuit and quantum dots as deterministic single-photon sources. This investigation was carried out in collaboration with the Optics of Semiconductor nanoStructures Group (GOSS) led by Prof. Pascale Senellart at the Laboratoire de Photonique et de Nanostructures (C2N).
Finally, we focused on problem (b), trying to answer the question of how to validate genuine multi-photon interference in an efficient way. Using optical chips built with FLM, we performed several experiments based on protocols suitable for the problem. We analyzed how to find the optimal transformations for identifying genuine quantum interference. For this purpose, we employed different figures of merit, such as the Total Variation Distance (TVD) and Bayesian tests, to exclude alternative hypotheses on the experimental data. The result of these analyses is the identification of two unitaries which belong to the class of Hadamard matrices, namely the Fourier and Sylvester transformations. Thanks to the unique properties associated with the symmetries of these unitaries, we are able to formalize rules to identify real photon interference, the so-called zero-transmission laws, by looking at specific outputs of the interferometers which are efficiently predictable. Subsequently, we further investigate the validation problem by looking at the target from a different perspective. We exploit two roads: retrieving signatures of quantum interference through machine-learning classification techniques, and extracting information from the experimental data by means of statistical tools. These approaches are based on choosing training samples from the data, which are used as a reference to classify the whole set of output data according to its physical behaviour. In this way we are able to rule out alternative hypotheses that are not based on true quantum interference.
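The core quantity behind Boson Sampling can be illustrated with a short sketch: for a collision-free input of n single photons into an m-mode interferometer described by a unitary U, the probability of a given output pattern is proportional to the squared modulus of the permanent of an n x n submatrix of U. The example below evaluates this for a balanced beam splitter, reproducing the Hong-Ou-Mandel suppression of the coincidence output; the function names are illustrative, and the brute-force permanent is only practical for small n.

    import itertools
    import math
    from collections import Counter

    def permanent(a):
        """Brute-force matrix permanent; adequate only for the small n used here."""
        n = len(a)
        total = 0
        for sigma in itertools.permutations(range(n)):
            prod = 1
            for i in range(n):
                prod *= a[i][sigma[i]]
            total += prod
        return total

    def output_probability(u, ins, outs):
        """Probability of the output pattern `outs` (modes listed with multiplicity),
        given a collision-free single-photon input in modes `ins` of the unitary `u`."""
        sub = [[u[o][i] for i in ins] for o in outs]
        norm = math.prod(math.factorial(c) for c in Counter(outs).values())
        return abs(permanent(sub)) ** 2 / norm

    # Balanced (50:50) beam splitter written as a 2-mode unitary.
    bs = [[1 / math.sqrt(2), 1j / math.sqrt(2)],
          [1j / math.sqrt(2), 1 / math.sqrt(2)]]

    print(output_probability(bs, ins=[0, 1], outs=[0, 1]))  # ~0.0: HOM suppression of coincidences
    print(output_probability(bs, ins=[0, 1], outs=[0, 0]))  # ~0.5: both photons bunch in mode 0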