
    New techniques for imaging photon-counting and particle detectors

    Since the advent of space-based astronomy in the early 1960s, there has been a need for space-qualified detectors with sufficient sensitivity and resolution to detect and image single photons, ions or electrons. This thesis describes a research programme to develop detectors that fulfil these requirements. I begin by describing the role of detectors in space astronomy and follow with a review of detector technologies, with particular emphasis on imaging techniques. Conductive charge division image readouts offer high performance, simplicity and flexibility, and their potential is investigated in both theory and practice. I introduce the basic design concept and discuss the fundamental factors limiting performance in relation to physical design and to underlying physical processes. Readout manufacturing techniques are reviewed and a novel method is presented. I describe specific space- and ground-based readout applications which yielded valuable lessons and raised questions. These questions initiated an experimental programme whose goals were to understand the limiting physical processes and to find techniques to overcome them. Results are presented, and the progressive geometry readout technique, an innovation which this programme also spawned, is described. Progressive geometry readout devices, such as the Vernier anode, offer dramatically improved performance and have been successfully flight-proven. I describe the development of a Vernier readout for the J-PEX sounding rocket experiment, and discuss the instrument calibration and the flight programme. First investigations into a next generation of charge division readout design are presented; these devices will use charge comparison instead of amplitude measurement to further enhance resolution and count-rate capability. In conclusion, I summarize the advances made during the course of this research, and discuss ongoing technological developments and further work which will enable microchannel plate (MCP) detectors to continue to excel where characteristics such as true photon-counting ability, high spatial resolution, format flexibility and high temporal resolution are required.
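
    The charge-division principle at the heart of these readouts can be illustrated with a one-dimensional toy model: the event position follows from the ratio of the charges collected at the two ends of a resistive anode. A minimal sketch, assuming idealized noise-free charges (the readouts developed in the thesis are two-dimensional patterns such as the wedge-and-strip or Vernier anode, not this simple divider):

```python
def charge_division_position(q_a: float, q_b: float, length: float = 1.0) -> float:
    """Estimate the event position along a 1-D charge-division readout.

    q_a, q_b: charges collected at the two ends of the anode.
    Returns the position measured from end A, in [0, length]:
    an event close to end B sends most of its charge to B.
    """
    q_total = q_a + q_b
    if q_total <= 0:
        raise ValueError("no collected charge")
    return length * q_b / q_total

# Example: 30% of the charge arrives at end A, 70% at end B.
print(charge_division_position(0.3, 0.7))  # -> 0.7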
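```

    In practice the achievable position resolution is set by the electronic noise on each charge measurement relative to the total event charge, which is one of the limiting physical processes such an experimental programme must address.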

    Nano-intrinsic security primitives for internet of everything

    With the advent of Internet-enabled electronic devices and mobile computer systems, maintaining data security is one of the most important challenges in modern civilization. The innovation of physically unclonable functions (PUFs) shows great potential for enabling low-cost, low-power authentication, anti-counterfeiting, and more on semiconductor chips. This is because secrets in a PUF are hidden in the randomness of the physical properties of nominally identical devices, making it extremely difficult, if not impossible, to extract them. Hence, the basic idea of a PUF is to take advantage of inevitable non-idealities in the physical domain to create a system that provides an innovative way to secure device identities, sensitive information, and their communications. While physical variation exists everywhere, various materials, systems, and technologies have been considered as sources of unpredictable device variation at large scale for generating security primitives. The purpose of this project is to develop emerging solid-state memory-based security primitives and examine their robustness and feasibility. First, the author gives an extensive overview of PUFs: their rationale, classification, and applications are discussed. To objectively compare the quality of PUFs, the author formulates important PUF properties and evaluation metrics. By reviewing previously proposed constructions ranging from conventional standard complementary metal-oxide-semiconductor (CMOS) components to emerging non-volatile memories, the quality of different PUF classes is discussed and summarized. Through a comparative analysis, emerging non-volatile redox-based resistive memories (ReRAMs) show potential as promising candidates for the next generation of low-cost, low-power, compact, and secure PUFs. Next, the author presents novel approaches to building a PUF from two concatenated layers of ReRAM crossbar arrays. Concatenating the two layers introduces a nonlinear structure, which improves the uniformity and the avalanche characteristic of the proposed PUF. A grouped cell-readout method is employed, supporting a massive pool of challenge-response pairs for the nonlinear ReRAM-based PUF. The nonlinear PUF construction is experimentally assessed using the evaluation metrics, and the quality of randomness is verified using predictive analysis. Last but not least, random telegraph noise (RTN) is studied as a source of entropy for true random number generation (TRNG). RTN is usually considered a disadvantageous feature in conventional CMOS designs; however, in combination with an appropriate readout scheme, RTN in ReRAM can be used as a novel technique to generate high-quality random numbers. The proposed differential readout-based design maintains output quality by reducing the effect of undesired noise from the whole system, while significantly reducing the control difficulty of the conventional readout method. This is advantageous because the differential readout circuit can accommodate the resistance variation of ReRAMs without extensive pre-calibration. The study in this thesis has the potential to enable the development of cost-efficient and lightweight security primitives that can be integrated into modern mobile computer systems and devices to provide a high level of security.
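
    Of the PUF evaluation metrics mentioned above, two of the most widely used can be stated concretely: uniformity (the balance of 0s and 1s within one device's response, ideally 0.5) and uniqueness (the mean normalized Hamming distance between different devices' responses to the same challenges, also ideally 0.5). A minimal sketch with hypothetical response data, not the thesis's actual metric suite:

```python
import numpy as np

def uniformity(response: np.ndarray) -> float:
    """Fraction of 1s in a single device's response bits (ideal: 0.5)."""
    return float(response.mean())

def uniqueness(responses: np.ndarray) -> float:
    """Mean normalized pairwise Hamming distance between the responses
    of different devices to the same challenges (ideal: 0.5)."""
    n, bits = responses.shape
    dists = [np.count_nonzero(responses[i] != responses[j]) / bits
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Hypothetical 128-bit responses from 10 PUF instances.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(10, 128))
print(f"uniformity: {uniformity(responses[0]):.3f}")
print(f"uniqueness: {uniqueness(responses):.3f}")
```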

    Algebraic graph theoretic applications to cryptography.

    Master of Science in Mathematics. University of KwaZulu-Natal, Durban, 2015. Abstract available in PDF file.

    Reconstructing Dynamical Systems From Stochastic Differential Equations to Machine Learning

    Modeling complex systems with large numbers of degrees of freedom has become a grand challenge over the past decades. Typically, only a few variables of complex systems are observed in terms of measured time series, while the majority of them, which potentially interact with the observed ones, remain hidden. Throughout this thesis, we tackle the problem of reconstructing and predicting the underlying dynamics of complex systems using different data-driven approaches. In the first part, we address the inverse problem of inferring an unknown network structure of complex systems, reflecting spreading phenomena, from observed event series. We study the pairwise statistical similarity between the sequences of event timings at all nodes through event synchronization (ES) and event coincidence analysis (ECA), relying on the idea that functional connectivity can serve as a proxy for structural connectivity. In the second part, we focus on reconstructing the underlying dynamics of complex systems from their dominant macroscopic variables using different stochastic differential equations (SDEs).
    We investigate the performance of three different SDE approaches: the Langevin equation (LE), the generalized Langevin equation (GLE), and empirical model reduction (EMR). Our results reveal that the LE performs better for systems with weak memory, while it fails to reconstruct the underlying dynamics of systems with memory effects and colored-noise forcing. In those situations, the GLE and EMR are more suitable candidates, since the interactions between observed and unobserved variables are accounted for in terms of memory effects. In the last part of this thesis, we develop a model based on the Echo State Network (ESN), combined with the past noise forecasting (PNF) method, to predict real-world complex systems. Our results show that the proposed model captures the crucial features of the underlying dynamics of climate variability.
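
    As an illustration of the Langevin-equation reconstruction idea: for dX = f(X) dt + g(X) dW, the drift and diffusion functions can be estimated directly from a measured time series through conditional moments of the increments (the standard Kramers-Moyal estimator). The sketch below applies this estimator to a synthetic Ornstein-Uhlenbeck series; it is an assumed minimal example, not the thesis code:

```python
import numpy as np

# Test series from an Ornstein-Uhlenbeck process, dX = -theta*X dt + sigma dW,
# integrated with the Euler-Maruyama scheme.
rng = np.random.default_rng(1)
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 200_000
x = np.empty(n)
x[0] = 0.0
steps = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * steps[i]

# Kramers-Moyal estimates, binned in x:
# drift D1(x) = <dx | x> / dt, diffusion D2(x) = <dx^2 | x> / (2 dt).
dx = np.diff(x)
bins = np.linspace(x.min(), x.max(), 30)
idx = np.digitize(x[:-1], bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 200:  # skip sparsely populated bins
        xc = 0.5 * (bins[b - 1] + bins[b])
        d1 = dx[sel].mean() / dt
        d2 = (dx[sel] ** 2).mean() / (2 * dt)
        print(f"x={xc:+.2f}  drift={d1:+.3f} (expect {-theta * xc:+.3f})  "
              f"diffusion={d2:.3f} (expect {sigma**2 / 2:.3f})")
```

    For a system with memory effects, this memoryless estimator is exactly what fails, which is why the GLE and EMR, with their explicit memory terms, are the more suitable candidates there.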

    Development of a silicon photomultiplier based innovative and low cost positron emission tomography scanner.

    The Silicon Photomultiplier (SiPM) is a state-of-the-art semiconductor photodetector consisting of a high-density matrix (up to 10⁴) of independent pixels of micrometric dimension (from 10 μm to 100 μm), which together form a macroscopic unit of 1 to 6 mm² area. Each pixel is a single-photon avalanche diode operated at a bias voltage a few volts above the breakdown voltage. When a charge carrier is generated in a pixel by an incoming photon or by a thermal effect, a Geiger discharge confined to that pixel is initiated and an intrinsic gain of about 10⁶ is obtained. The output signal of a pixel is the same regardless of the number of interacting photons and provides only binary information. Since the pixels are arranged on a common silicon substrate and are connected in parallel to the same readout line, the combined SiPM output corresponds to the sum of all fired pixel currents. As a result, the SiPM as a whole is an analogue detector which can measure the incoming light intensity. Nowadays a great number of companies are investing increasing effort in SiPM detector performance and high-quality mass production. SiPMs are evolving rapidly and benefit from the tremendous development of silicon technology in terms of production cost, design flexibility and performance. They have reached high single-photon detection sensitivity and photon detection efficiency, excellent time resolution, and an extended dynamic range. They require a low bias voltage, have low power consumption, and are compact, robust, flexible and cheap. Considering also their intrinsic insensitivity to magnetic fields, they have extremely high potential in fundamental and applied science (particle and nuclear physics, astrophysics, biology, environmental science and nuclear medicine) and in industry. SiPM performance is affected by several effects, such as saturation, afterpulsing and crosstalk, which lead to an inherently non-proportional response with respect to the number of incident photons (a standard model of this saturation is sketched after this abstract). Consequently, it is not trivial to relate the measured electronic signal to the corresponding light intensity. Since for most applications it is desirable to qualify the SiPM response (e.g. to properly design a detector for a given application, perform corrections on measurements or on energy spectra, calibrate a SiPM for low-light measurements, or predict detector performance), the implementation of characterization procedures plays a key role.
    The SiPM field of application considered in this thesis is Positron Emission Tomography (PET). PET represents the most advanced in-vivo nuclear imaging modality: it provides functional information on the physiological and molecular processes of organs and tissues. Thanks to its diagnostic power, PET has a recognized superiority over all other imaging modalities in oncology, neurology and cardiology. SiPMs are successfully employed in PET scanners because they allow measurement of the time of flight of the two coincidence photons, improving the signal-to-noise ratio of the reconstructed images. They also make it possible to combine the functional information with the anatomical one by inserting the PET scanner inside a Magnetic Resonance Imaging device. Recently, PET technology has also been applied to preclinical imaging to allow non-invasive studies on small animals. The increasing demand for preclinical PET scanners is driven by the fact that small animals host a large number of human diseases. In-vivo imaging enables measurement of the radiopharmaceutical distribution in the same animal over an extended period of time. As a result, PET represents a powerful research tool, as it offers the possibility to study the abnormalities at the origin of a disease, understand its dynamics, evaluate the therapeutic response, and develop new drugs and treatments. However, the cost and complexity of preclinical scanners are limiting factors for the spread of PET technology: 70-80% of small-animal PET is concentrated in academic or government research laboratories.
    The EasyPET concept proposed in this thesis, protected under a patent filed by Aveiro University, aims to achieve a simple and affordable preclinical PET scanner. The innovative concept is based on a single pair of detectors kept collinear during the whole data acquisition and on a moving mechanism with two degrees of freedom that reproduces the functionality of an entire PET ring. The main advantages are a reduction in the complexity and cost of the PET system. In addition, the concept is inherently robust against acollinear photoemission, scattered radiation and parallax error. Sensitivity is expected to be a weak point because of the reduced geometrical acceptance; this drawback can be partially recovered by accepting Compton scattering events without introducing image degradation effects, thanks to the sensor alignment. A 2D imaging demonstrator was realized in order to assess the EasyPET concept, and its performance is analyzed in this thesis to verify the net balance between competing advantages and drawbacks. The demonstrator had a leading role in the outreach activity to promote the EasyPET concept, and a significant outcome is represented by the new partners that recently joined the collaboration. EasyPET has been licensed to Caen S.p.a. and, thanks to the participation of Nuclear Instruments in the redesign of the electronic board, a new prototype has been realized with additional improvements to the mechanics and the control software. In this thesis the prototype functionality and performance are reported as the result of a commissioning procedure. EasyPET will be commercialized by Caen S.p.a. as a product for the educational market, aimed at advanced teaching laboratories, to show the operating principles and technology behind PET imaging.
    The topics mentioned above are examined in depth in the following chapters. Chapter 1 describes the Silicon Photomultiplier in detail, from its operating principle to its main application fields, including the advantages and the drawback effects connected with this type of sensor. Chapter 2 is dedicated to a standard SiPM characterization method based on staircase and resolving-power measurements; a more refined analysis involves the multi-photon spectrum, obtained by integrating the SiPM response to a light pulse, which exploits the SiPM single-photon sensitivity and photon-number resolving capability to measure properties of general interest for a multitude of potential applications, disentangling the features related to the statistics of the incident light. Chapter 3 reports another SiPM characterization method, which post-processes the digitized SiPM waveforms to extract a full picture of the sensor characteristics from a single data set; the procedure is robust, effective, semi-automatic, and suitable for sensors of various dimensions produced by different vendors. Chapter 4 introduces positron emission tomography imaging: its principle, applications, related issues and the state of the art of PET scanners are explained. Chapter 5 deals with preclinical PET, reporting the benefits and technological challenges involved, the performance of commercially available small-animal PET scanners, the main applications and the frontier research in this field. Chapter 6 introduces the EasyPET concept: the basic idea behind the operating principle, the design layout and the image reconstruction are illustrated and then assessed through the description and performance analysis of the EasyPET proof of concept and demonstrator. Chapter 7 reports the effect of using different sensors to improve the light collection and coincidence detection efficiency, together with an analysis of the importance of sensor and crystal alignment. Chapter 8 defines the design, functionality and commissioning of the EasyPET prototype addressed to the educational market. Finally, Chapter 9 contains a summary of the conclusions and an outlook on future research.
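
    The saturating, non-proportional SiPM response mentioned in this abstract is commonly described by assuming that incident photons land at random on a finite number of cells, each of which can fire at most once per light pulse. A minimal sketch of that textbook model with hypothetical device parameters (not the characterization procedures developed in the thesis):

```python
import numpy as np

def fired_cells(n_photons: float, n_cells: int, pde: float) -> float:
    """Expected number of fired pixels for a light pulse shorter than
    the cell recovery time: photons are spread at random over n_cells
    pixels, each firing at most once, so the response saturates.
    """
    return n_cells * (1.0 - np.exp(-n_photons * pde / n_cells))

# Hypothetical device: 3600 cells, 40% photon detection efficiency.
for n_ph in (10, 100, 1_000, 10_000, 100_000):
    print(n_ph, round(fired_cells(n_ph, 3600, 0.40), 1))
# The output rises linearly at first, then flattens toward 3600.
```

    Inverting this relation is one way to relate the measured signal back to the incident light intensity, which is why characterizing parameters such as the effective cell count and photon detection efficiency matters for the applications listed above.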