
    All-Silicon-Based Photonic Quantum Random Number Generators

    Random numbers are fundamental elements in many fields of science and technology, such as computer simulation (e.g. Monte Carlo methods), statistical sampling, cryptography, games and gambling, and other areas where unpredictable results are necessary. Random number generators (RNGs) are generally classified as "pseudo" random number generators (PRNGs) and "truly" random number generators (TRNGs). Pseudo-random numbers are generated by computer algorithms from a (random) seed and a specific formula. The random numbers produced in this way (with a small degree of unpredictability) are good enough for some applications, such as computer simulation. For other applications, such as cryptography, they are not completely reliable: once the seed is revealed, the entire sequence of numbers can be reproduced. Periodicity is another undesirable property of PRNGs; it can be disregarded for most practical purposes if the sequence recurs only after a very long period, but predictability remains a fundamental disadvantage of this type of generator. Truly random numbers, on the other hand, can be generated from physical sources of randomness, like flipping a coin. However, approaches that exploit classical motion and classical physics to generate random numbers possess a deterministic nature that is transferred to the generated random numbers. The best solution is to exploit the indeterminacy and randomness inherent in quantum physics. According to quantum theory, the properties of a particle cannot be determined with arbitrary precision until a measurement is carried out; the result of a measurement therefore remains unpredictable and random. Optical phenomena involving photons, the quanta of light, offer various random, non-deterministic properties, including the polarization of the photons, the exact number of photons impinging on a detector and the photon arrival times. Such intrinsically random properties can be exploited to generate truly random numbers. Silicon (Si) is an attractive material for integrated optics: microelectronic chips made from Si are cheap, easy to mass-fabricate and can be densely integrated. Si integrated optical chips, which can generate, modulate, process and detect light signals, exploit these benefits while being fully compatible with electronics. Since many electronic components can be integrated into a single chip, Si is an ideal candidate for the production of small, powerful devices, and complementary metal-oxide-semiconductor (CMOS) technology makes it possible to fabricate compact, mass-manufacturable devices with integrated components on the Si platform. In this thesis we aim to model, study and fabricate a compact photonic quantum random number generator (QRNG) on the Si platform that is able to generate high-quality, "truly" random numbers. The proposed QRNG is based on a Si light source (LED) coupled with a Si single-photon avalanche diode (SPAD) or an array of SPADs, known as a Si photomultiplier (SiPM). Various implementations of the QRNG have been developed, reaching an ultimate geometry where both the source and the SPAD are integrated on the same chip and fabricated by the same process. This activity was performed within the project SiQuro (on-Si-chip quantum optics for quantum computing and secure communications), which aims to bring the quantum world into integrated photonics.
By following the same successful paradigm as microelectronics (the study and design of very small electronic devices, typically made from semiconductor materials), the vision is to have low-cost, mass-manufacturable integrated quantum photonic circuits for a variety of applications in quantum computing, metrology, sensing, secure communications and services. The Si platform permits, in a natural way, the integration of quantum photonics with electronics. Two methodologies are presented to generate random numbers: one based on photon counting measurements and the other based on photon arrival time measurements. The latter is robust, masks the drawbacks of afterpulsing, dead time and jitter of the Si SPAD, and is effectively insensitive to ageing of the LED and to its emission drifts related to temperature variations. The raw data pass all the statistical tests in the National Institute of Standards and Technology (NIST) test suite and the TestU01 Alphabit battery without any post-processing algorithm. The maximum demonstrated bit rate is 1.68 Mbps with an efficiency of 4 bits per detected photon. In order to realize a small, portable QRNG, we have produced a compact configuration consisting of a Si nanocrystal (Si-NC) LED and a SiPM. All the statistical tests in the NIST test suite pass for the raw data, with a maximum bit rate of 0.5 Mbps. We also prepared and studied a compact chip consisting of a Si-NC LED and an array of detectors. An integrated chip, composed of a Si p+/n junction working in the avalanche regime and a Si SPAD, was produced as well; high-quality random numbers are produced through our robust methodology at a maximum rate of 100 kcps. Integration of the source of entropy and the detector on a single chip is an efficient way to produce a compact RNG. A small RNG is an essential element in guaranteeing the security of our everyday life and can be readily implemented in electronic devices for data encryption. The idea of "utmost security" would no longer be limited to particular organizations holding sensitive information; it would be accessible to everyone in everyday life.
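The abstract does not spell out how random bits are extracted from the detection events. As a generic illustration of the arrival-time approach (an assumption, not necessarily the exact procedure used in the thesis), the sketch below timestamps each detection with a fine clock (here a hypothetical 100 ps resolution) and keeps the four least-significant bits of each photon inter-arrival time, which become nearly uniform when the clock is much finer than the mean inter-arrival time.

```python
# Hypothetical sketch of arrival-time bit extraction: keep the low-order bits of
# each photon inter-arrival time measured with a fine clock. Clock resolution and
# bits-per-photon are illustrative assumptions.
import numpy as np

def bits_from_timestamps(timestamps_ns, clock_ps=100, bits_per_photon=4):
    """timestamps_ns: sorted detection times in nanoseconds (floats)."""
    ticks = np.floor(np.asarray(timestamps_ns) * 1000.0 / clock_ps).astype(np.int64)
    dt = np.diff(ticks)                       # inter-arrival times in clock ticks
    vals = dt & ((1 << bits_per_photon) - 1)  # keep the least-significant bits
    return np.unpackbits(vals.astype(np.uint8)[:, None], axis=1)[:, -bits_per_photon:].ravel()

# Example with simulated Poissonian detections at ~1 Mcps:
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1000.0, size=10000))   # mean inter-arrival time 1000 ns
bits = bits_from_timestamps(t)
print(len(bits), bits[:16])
```

With roughly 10,000 detections this yields about 40,000 bits, i.e. 4 bits per detected photon, matching the efficiency quoted in the abstract.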

    Tools for developing continuous-flow micro-mixer : numerical simulation of transitional flow in micro geometries and a quantitative technique for extracting dynamic information from micro-bubble images

    Recent advances in microfluidics, including its fabrication technologies, have led to many novel applications in micro-scale flows. Among them is the continuous-flow micromixer, which exploits the advantages of turbulent flows for rapid mixing, enabling the detection of fast kinetic reactions on timescales as short as tens of microseconds. However, to develop a high-performance continuous-flow micromixer, certain fundamental issues need to be solved. One of them is a universal simulation approach capable of calculating the flow field across the entire passage and across the entire regime, from very low Reynolds number laminar flow through transition to fully turbulent flow. Although direct numerical simulation is a possible solution, its extremely high computing time rules it out for practical applications. The second major issue is the inevitable occurrence of cavitation bubbles in this rapid-flow apparatus. This phenomenon has opposing effects: (a) it deteriorates performance and damages the micromixer; (b) it plays a catalytic role in enhancing mixing. A full understanding of these micro-bubbles will provide a sound theoretical basis for guiding the design of micromixers so as to exploit the advantages to the maximum while minimizing the disadvantages. The objectives of this PhD programme are therefore to study tools that will effectively advance our fundamental understanding of these key issues while, in the short term, fulfilling the requirements of the joint experimental PhD programme in the life sciences faculty for designing a prototype experimental device. During this PhD study, a numerical approach suitable for predicting essentially the entire flow regime, including the turbulence transition, is adopted for simulating micro-scale flows in the microchannel and micromixer. The simulation results are validated against transitional micro-channel experiments, and the numerical method is then further applied to micromixer simulation. This provides researchers with a realistic and feasible CFD tool for establishing guidelines for designing high-efficiency and cost-effective micromixers using various possible measures, which may cause very different flows simultaneously in the micromixer. In order to study micro-scale cavitation bubbles and their effects on micromixers, an experimental setup was purposely designed and constructed that can generate laser-induced micro-bubbles of the desired position and size for testing. Experiments with various micro-scale bubbles have been performed successfully using an ultra-high-speed camera at up to one million frames per second. A novel technique for dynamically tracking the contours of micro-scale cavitation bubbles has been developed using the active contour method (a minimal sketch of this kind of contour fit is given below). Using this technique, for the first time, various geometric and dynamic data of cavitation bubbles have been obtained to quantitatively and thoroughly analyse the global behaviour of the bubbles. This powerful tool will greatly benefit the study of bubble dynamics, as well as similar demands for fast and accurate image processing in other fields.
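Since the abstract names the active contour (snake) method for contour tracking, a minimal sketch of such a fit is shown below using scikit-image. The frame source, initial centre and radius, and the smoothing and snake parameters are illustrative assumptions rather than the values used in the thesis.

```python
# Minimal sketch of active-contour (snake) bubble tracking, assuming frames are
# available as 2-D grayscale numpy arrays; the centre/radius guesses are placeholders.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_bubble(frame, centre, radius, n_points=200):
    """Fit a closed snake to one bubble in a single frame.

    frame  : 2-D grayscale image
    centre : (row, col) initial guess of the bubble centre
    radius : initial guess of the bubble radius in pixels
    """
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([centre[0] + radius * np.sin(theta),
                            centre[1] + radius * np.cos(theta)])
    snake = active_contour(gaussian(frame, sigma=2, preserve_range=False),
                           init, alpha=0.015, beta=10, gamma=0.001)
    # Equivalent radius from the enclosed area (shoelace formula).
    r, c = snake[:, 0], snake[:, 1]
    area = 0.5 * np.abs(np.dot(r, np.roll(c, 1)) - np.dot(c, np.roll(r, 1)))
    return snake, np.sqrt(area / np.pi)
```

Applied frame by frame, the returned equivalent radius gives a simple time series of the bubble's growth and collapse from which dynamic quantities can be derived.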

    Digital Design of New Chaotic Ciphers for Ethernet Traffic

    In recent years there has been great progress in the field of cryptography, and many encryption algorithms as well as other cryptographic functions have been proposed. Despite this progress, there is still great interest today in creating new cryptographic primitives or in improving existing ones. Some of the reasons are the following:
• First, owing to the development of communication technologies, the amount of information being transmitted is constantly increasing. In this context, numerous applications require encrypting large amounts of data in real time or within a very short time interval; one example is the real-time encryption of high-resolution video. Unfortunately, most of the encryption algorithms in use today cannot encrypt large amounts of data at high speed while maintaining high security standards.
• Owing to the large increase in available computing power, many algorithms that were traditionally considered secure can now be attacked by brute-force methods in a reasonable amount of time. For example, when the DES (Data Encryption Standard) encryption algorithm was first released, its key size was only 56 bits, whereas today NIST (National Institute of Standards and Technology) recommends that symmetric encryption algorithms use keys of at least 112 bits. Moreover, significant advances are being made in quantum computing, and large-scale quantum computers are expected to be developed in the future; if so, it has been shown that some algorithms in current use, such as RSA (Rivest Shamir Adleman), could be attacked successfully.
• Alongside the development of cryptography, there has also been great progress in cryptanalysis, so new vulnerabilities are constantly being found and new attacks proposed. It is therefore necessary to look for new algorithms that are robust against all known attacks, to replace those in which vulnerabilities have been found. In this respect, it is worth noting that algorithms such as RSA and ElGamal rely on the assumption that certain problems, such as factoring the product of two large primes or computing discrete logarithms, are hard to solve; it has not been ruled out, however, that algorithms able to solve these problems quickly (in polynomial time) will be developed in the future.
• Ideally, the keys used to encrypt data should be generated randomly so as to be completely unpredictable. Since the sequences produced by pseudo-random number generators (PRNGs) are predictable, they are potentially vulnerable to cryptanalysis. Keys are therefore usually generated with true random number generators (TRNGs). Unfortunately, TRNGs normally generate bits at a lower rate than PRNGs, and the generated sequences usually have worse statistical properties, which makes a post-processing stage necessary.
Using a low-quality TRNG to generate keys can compromise the security of the entire encryption system, as has already happened on several occasions. The design of new TRNGs with good statistical properties is therefore a topic of great interest. In summary, there are numerous important lines of research in cryptography. Since the field is very broad, this thesis focuses on three of them: the design of new TRNGs, the design of new fast and secure chaotic stream ciphers and, finally, the implementation of new cryptosystems for Gigabit Ethernet optical communications at 1 Gbps and 10 Gbps. These cryptosystems are based on the proposed chaotic algorithms, but they are adapted to perform the encryption at the physical layer while preserving the line-code format. In this way, the systems not only encrypt the data but also prevent an attacker from knowing whether a communication is taking place at all. The main aspects covered in this thesis are the following:
• A study of the state of the art, including the encryption algorithms currently in use. This part analyses the main problems of today's standard encryption algorithms and the solutions that have been proposed; this study is necessary in order to design new algorithms that solve those problems.
• The proposal of new TRNGs suitable for key generation. Two possibilities are explored: the noise generated by a MEMS (Microelectromechanical Systems) accelerometer and the noise generated by DNOs (Digital Nonlinear Oscillators). Both cases are analysed in detail through several statistical analyses of sequences obtained at different sampling frequencies. A simple post-processing algorithm is also proposed and implemented to improve the randomness of the generated sequences (a sketch of a typical corrector of this kind is given after this overview). Finally, the possibility of using these TRNGs as key generators is discussed.
• The proposal of new encryption algorithms that are fast, secure and implementable with a reduced amount of resources. Among all the possibilities, this thesis focuses on chaotic systems since, thanks to intrinsic properties such as ergodicity and random-like behaviour, they can be a good alternative to classical encryption systems. To overcome the problems that arise when these systems are digitized, several strategies are proposed and studied: using a multi-encryption scheme, changing the control parameters of the chaotic systems, and perturbing the chaotic orbits (a toy example of such a perturbed chaotic keystream is sketched after this overview).
• Implementation of the proposed algorithms on a Virtex 7 FPGA. The different implementations are analysed and compared in terms of power consumption, area usage, encryption speed and the level of security obtained. One of these designs is chosen for implementation in an ASIC (Application Specific Integrated Circuit) using a 0.18 um technology. In any case, the proposed solutions can also be implemented on other platforms and with other technologies.
• Finally, the proposed algorithms are adapted and applied to Gigabit Ethernet optical communications.
In particular, cryptosystems that perform encryption at the physical layer are implemented for 1 Gbps and 10 Gbps rates. To encrypt at the physical layer, the algorithms proposed in the previous sections are adapted so that they preserve the line-code format: 8b/10b for 1 Gb Ethernet and 64b/66b for 10 Gb Ethernet. In both cases the cryptosystems are implemented on a Virtex 7 FPGA and an experimental setup is built that includes two SFP (Small Form-factor Pluggable) modules capable of transmitting at up to 10.3125 Gbps over multimode fibre at 850 nm. With this setup it is verified that the encryption systems operate correctly and synchronously, that the encryption is strong (it passes all the security tests) and that the data-traffic pattern is hidden.
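The post-processing algorithm itself is not described in the abstract. As an illustration of the kind of simple corrector referred to above, here is a minimal Von Neumann debiasing sketch (a classic, generic technique; the thesis may well use a different scheme):

```python
# Minimal Von Neumann corrector: a simple post-processing step that removes bias
# from a raw TRNG bit stream (generic illustration, not the thesis' algorithm).
def von_neumann_debias(raw_bits):
    out = []
    # Examine non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 -> discarded.
    for b0, b1 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

# Example: a biased stream yields unbiased (but shorter) output.
print(von_neumann_debias([1, 1, 1, 0, 0, 1, 1, 1, 0, 0]))  # -> [1, 0]
```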
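The abstract likewise does not specify which chaotic map is digitized or how the orbits are perturbed. The following toy sketch, assuming a fixed-point logistic map and a 32-bit LFSR perturbation, only illustrates the general idea of a digitized chaotic keystream with orbit perturbation driving a stream cipher:

```python
# Sketch of a digitized chaotic stream cipher: a fixed-point logistic map whose
# orbit is periodically perturbed by an LFSR to avoid short cycles, XOR-ed with
# the plaintext. Map choice, word width and perturbation period are assumptions,
# not the thesis' actual design.
WIDTH = 32
MASK = (1 << WIDTH) - 1

def logistic_step(x):
    # x in [0, 2^WIDTH): fixed-point version of x -> 4*x*(1-x).
    return (4 * x * (MASK - x) >> WIDTH) & MASK

def lfsr_step(state):
    # 32-bit Fibonacci LFSR with taps 32, 22, 2, 1 (maximal-length polynomial).
    bit = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
    return ((state << 1) | bit) & MASK

def keystream(x0, lfsr0, n_bytes, perturb_every=16):
    x, lfsr, out = x0 & MASK, lfsr0 & MASK, bytearray()
    for i in range(n_bytes):
        x = logistic_step(x)
        if i % perturb_every == 0:             # perturb the orbit periodically
            lfsr = lfsr_step(lfsr)
            x ^= lfsr & 0xFF                   # small perturbation of the low bits
        out.append((x >> (WIDTH - 8)) & 0xFF)  # keystream byte = top byte of x
    return bytes(out)

def xor_cipher(data, x0, lfsr0):
    ks = keystream(x0, lfsr0, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"hello ethernet"
ct = xor_cipher(msg, 0x3A7F19B2, 0xDEADBEEF)
assert xor_cipher(ct, 0x3A7F19B2, 0xDEADBEEF) == msg  # decryption recovers the message
```

The perturbation step addresses the well-known collapse of digitized chaotic maps into short cycles; in hardware the same idea maps naturally onto an FPGA datapath with a small LFSR alongside the map iterator.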

    Matter-light entanglement with cold atomic ensembles

    In this thesis I present investigations of matter-light entanglement in cold atomic samples. In particular, entanglement of mixed-species ensembles and bichromatic light fields is proposed and demonstrated experimentally. This approach avoids the use of two interferometrically separate paths for qubit entanglement distribution. I also present the first implementation of a multiplexed quantum memory, and experimentally demonstrate entanglement involving arbitrary pairs of elements within this memory array. Finally, quantum interference of electromagnetic fields emitted by remote quantum memory elements separated by 5.5 m is realized. Ph.D. Committee Chair: Kuzmich, Alex; Committee Member: Chapman, Michael; Committee Member: Citrin, David; Committee Member: Kennedy, T. A. Brian; Committee Member: Raman, Chandr

    A review of mechanoluminescence in inorganic solids : compounds, mechanisms, models and applications

    Mechanoluminescence (ML) is the non-thermal emission of light in response to mechanical stimuli on a solid material. While this phenomenon has long been observed when breaking certain materials, it is now being extensively explored, especially since the discovery of non-destructive ML upon elastic deformation. A great number of materials have already been identified as mechanoluminescent, but novel ones with colour tunability and improved sensitivity are still urgently needed. The physical origin of the phenomenon, which mainly involves the stress-assisted release of trapped carriers at defects, still remains unclear. This in turn hinders deeper research, whether theoretical or application-oriented. In this review paper, we have tabulated the known ML compounds according to their structural prototypes based on the connectivity of anion polyhedra, highlighting structural features, such as framework distortion, layered structures, elastic anisotropy and microstructures, which are highly relevant to the ML process. We then review the various proposed mechanisms and the corresponding mathematical models. We comment on their contribution to a clearer understanding of the ML phenomenon and on the derived guidelines for improving the properties of ML phosphors. Proven and potential applications of ML in various fields, such as stress-field sensing, light sources, and the sensing of electric (magnetic) fields, are summarized. Finally, we point out the challenges and future directions in this active and emerging field of luminescence research

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010 Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be a periodic annual meeting to expose and discuss gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability

    Optical code-division multiple access system and optical signal processing

    This thesis presents our recent research on the development of coding devices, the investigation of security, and the design of systems for optical code-division multiple access (OCDMA). In addition, the nonlinear signal processing techniques used in OCDMA systems motivated further work on all-optical signal processing, which is also summarized in this thesis. Two fiber Bragg grating (FBG) based coding devices are proposed. The first is a superstructured FBG (SSFBG) using ±π/2 phase shifts instead of the conventional 0/π phase shifts. The ±π/2-phase-shifted SSFBG en/decoders can not only conceal optical codes well in the encoded signals but also allow the reuse of available codes when used together with conventional 0/π-phase-shifted SSFBG en/decoders. The second FBG-based coding device is synthesized by the layer-peeling method and can be used for simultaneous optical code recognition and chromatic dispersion compensation. Then, two eavesdropping schemes, one-bit delay interference detection and differential detection, are demonstrated to reveal the security vulnerability of differential phase-shift keying (DPSK) and code-shift keying (CSK) OCDMA systems. To address the security issue and increase the transmission capacity, an orthogonal modulation format based on DPSK and CSK is introduced into the OCDMA systems; a 2 bit/symbol, 10 Gsymbol/s transmission system using this orthogonal modulation format is achieved, and the security of the system can be partially guaranteed. Furthermore, a fully asynchronous, gigabit-symmetric OCDMA passive optical network (PON) is proposed, in which a self-clocked time gate is employed for signal regeneration. A remodulation scheme is used in the PON, which lets downstream and upstream share the same optical carrier, allowing the optical network units to be source-free. Error-free 4-user, 10 Gbit/s/user duplex transmission over a 50 km distance is realized. A versatile waveform generation scheme is then studied: a theoretical model is established, a waveform prediction algorithm is summarized, and various waveforms are generated in the demonstration, including short pulses, trapezoidal, triangular and sawtooth waveforms, and doublet pulses. In addition, an all-optical simultaneous half-addition and half-subtraction scheme is achieved at an operating rate of 10 GHz by using only two semiconductor optical amplifiers (SOAs) without any assist light. Lastly, two modulation format conversion schemes are demonstrated: from NRZ-OOK to a PSK-Manchester coding format using an SOA-based Mach-Zehnder interferometer, and from RZ-DQPSK to RZ-OOK using a supercontinuum-based optical thresholder
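As background to the coding devices described above, the sketch below illustrates coherent time-spreading en/decoding with phase codes: the encoder spreads a short pulse with a phase code, the matched decoder (the conjugate, time-reversed code) reconstructs a strong autocorrelation peak, while a mismatched code produces only low-level cross-correlation. The code length and the random ±π/2 phase patterns are illustrative assumptions, not the codes used in the thesis.

```python
# Sketch of coherent time-spreading OCDMA en/decoding with +/- pi/2 phase codes.
import numpy as np

rng = np.random.default_rng(1)
N = 63                                                       # number of code chips (assumed)
code_a = np.exp(1j * rng.choice([np.pi / 2, -np.pi / 2], N))  # intended user's code
code_b = np.exp(1j * rng.choice([np.pi / 2, -np.pi / 2], N))  # a different user's code

pulse = np.array([1.0 + 0j])              # idealised short input pulse
encoded = np.convolve(pulse, code_a)      # encoder acts as the code's impulse response

def decode(signal, code):
    matched = np.conj(code[::-1])         # matched-filter (conjugate, time-reversed) decoder
    return np.abs(np.convolve(signal, matched)) ** 2

peak_matched = decode(encoded, code_a).max()
peak_wrong = decode(encoded, code_b).max()
print(f"matched peak / wrong-code peak = {peak_matched / peak_wrong:.1f}")
```

The large contrast between the matched and mismatched peaks is what allows code recognition; conversely, the phase choice affects how well the code is concealed from an eavesdropper, which is the security aspect studied in the thesis.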

    Characterisation and optimisation of the semiconductor optical amplifier for ultra-high speed performance

    This research is in the area of high-speed telecommunication systems, where all-optical technologies are being introduced to meet the ever-increasing demand for bandwidth by replacing costly electro-optical conversion modules. In such systems, all-optical routers are the key technology capable of supporting networks with high capacity/bandwidth as well as offering lower power consumption. One of the fundamental building blocks in all-optical routers/networks is the semiconductor optical amplifier (SOA), which is used for clock extraction, wavelength conversion, all-optical gates and optical processing. SOAs are well suited to optical amplification and optical switching at very high speed, owing to their small size, low switching energy, nonlinear characteristics and seamless integration with other optical devices. Therefore, characterisation of the SOA's operational functionality and optimisation of its performance for amplification and switching are essential and challenging. Existing models of SOA gain dynamics do not address the impact of the propagating optical wavelength, the combined input parameters and their adaptation for optimised amplification and switching operation. SOA operation at data rates above 2.5 Gb/s is limited, to a great extent, by the gain recovery time. A number of schemes have been proposed to overcome this limitation; however, no work has been reported on improving the gain uniformity of the SOA. This research aims to characterise the boundary conditions and optimise the SOA performance for amplification and switching. The research also proposes alternative techniques to maximise SOA gain uniformity at ultra-high-speed data rates, both theoretically and practically. An SOA model has been developed and used throughout the research for theoretical simulations. Results show that the optimum conditions required to achieve the maximum output gain for best amplification performance depend on the SOA peak-gain wavelength. It is also shown that the optimum phase shift of 180º for switching can be induced at a lower input power level when the SOA biasing current is at its maximum limit. A gain standard deviation equation is introduced to measure SOA gain uniformity. A new wavelength diversity technique is proposed, achieving an average improvement of 7.82 dB in the SOA gain standard deviation at rates from 10 to 160 Gb/s. Other novel techniques that improve the gain uniformity, employing triangular and sawtooth bias currents as replacements for uniform biasing, have also been proposed; however, these current patterns were not able to improve the SOA gain uniformity at data rates beyond 40 Gb/s. For that reason, an optimised biasing for SOA (OBS) pattern is introduced to maximise the gain uniformity at any input data rate. This OBS pattern was generated in practice and compared to the uniformly biased SOA at different data rates and with different input bit sequences. All the experiments showed better output uniformity with the proposed OBS pattern, with an average improvement of 19%
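The abstract refers to an SOA model used for the theoretical simulations and to a gain standard deviation used as the uniformity metric, but neither is spelled out here. The sketch below uses a widely known reduced gain-dynamics model of the Agrawal-Olsson type (integrated gain h with carrier recovery and saturation) and reports the relative gain standard deviation over a pulse train; all parameter values are illustrative assumptions, not those of the SOA characterised in the thesis.

```python
# Sketch of a reduced SOA gain-dynamics model (Agrawal-Olsson type):
#   dh/dt = (h0 - h)/tau_c - (exp(h) - 1) * Pin(t) / Esat
# where h is the integrated gain. Parameter values are illustrative assumptions.
import numpy as np

def soa_gain(pin_W, dt_s, G0=1000.0, tau_c=200e-12, Esat=5e-12):
    """Integrate the gain response to an input power waveform pin_W (watts)."""
    h0 = np.log(G0)                      # unsaturated integrated gain
    h = h0
    gain = np.empty_like(pin_W)
    for i, p in enumerate(pin_W):
        dh = (h0 - h) / tau_c - (np.exp(h) - 1.0) * p / Esat
        h += dh * dt_s                   # simple forward-Euler step
        gain[i] = np.exp(h)
    return gain

# Example: gain compression and recovery for a train of 2 ps pulses at 40 GHz.
dt = 50e-15
t = np.arange(0, 100e-12, dt)
pulses = sum(np.exp(-((t - t0) / 2e-12) ** 2) for t0 in np.arange(5e-12, 100e-12, 25e-12))
g = soa_gain(1e-3 * pulses, dt)
print(f"gain std/mean (uniformity metric): {g.std() / g.mean():.3f}")
```

A lower std/mean of the gain over the bit or pulse sequence corresponds to better gain uniformity, which is the quantity the wavelength diversity and OBS biasing techniques described above aim to improve.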