191 research outputs found

    A theoretical study of ultrafast phenomena in complex atoms

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Facultad de Ciencias, Departamento de Química. Date of defence: 15-11-2019. Access to the full text is embargoed until 15-05-2021.

    The ultrafast movement of electrons is a driving force of chemical reactions, making it a highly desirable avenue for study. This thesis studies such movements in complex atomic systems, making use of pump-probe methods such as attosecond transient absorption spectroscopy (ATAS) and the reconstruction of attosecond beatings by interference of two-photon transitions (RABITT). The main approach used to solve the time-dependent Schrödinger equation (TDSE) was exact, attosecond, full-electron, ab-initio calculation. Firstly, helium was probed above the second ionisation threshold, where several ionisation channels are open, using accurate ab-initio calculations. Here, the ATAS method was employed to predict beatings between the autoionising 3snp ¹Pᵒ resonances and nearby ¹Sᵉ and ¹Dᵉ states. More surprisingly, two-photon beatings between the doubly excited 3s3p state and the ¹Pᵒ continuum were also observed, demonstrating control of the correlated, two-electron, multichannel wave packet. Secondly, two studies of neon were carried out below the second ionisation threshold. The first makes use of ATAS calculations to probe beatings between the autoionising neon states. Using a two-colour pump mixing extreme-ultraviolet (XUV) and near-infrared (NIR) radiation, one-photon beatings between the 2s⁻¹3p ¹Pᵒ resonance and the nearby 2s⁻¹3s ¹Sᵉ and 2s⁻¹3d ¹Dᵉ resonances are observed. Further, one- and two-photon beatings between the autoionising 2s⁻¹3ℓ states, ℓ ∈ {0, 1}, and the ¹Pᵒ continuum are predicted. The second uses the RABITT method to probe the atomic phase in the vicinity of multiple resonances.
    This is far from trivial: interferometric methods have until now been restricted to simpler energy regions, because the electron correlation at play in the more complex case is difficult to describe accurately, leaving experiments without accurate ab-initio calculations to guide them. Despite the complex energy dependence of the phase when several resonances are present, the presented experimental and ab-initio results are in excellent agreement. Further, using a simple extension of the Fano model for resonant continua, the contributions of the different resonances involved are disentangled. Such simple models are highly desirable in more advanced systems, where accurate ab-initio calculations are inaccessible. The ab-initio results of both neon studies were obtained using the newly developed XCHEM methodology, which is thus further validated by the excellent agreement with the presented experiments and previous studies. Finally, a RABITT study of argon in the vicinity of the 3s⁻¹nℓ resonances was performed. Angularly resolved experimental results are presented, showing the anisotropy of the atomic phase in smooth continua as well as in the vicinity of resonances. Due to the complexity of the system, no ab-initio results are presented. Instead, simpler interferometric models are used to successfully explain the anisotropic behaviour of the phase.
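In a RABITT measurement, the sideband yield oscillates with pump-probe delay as S(τ) ≈ A + B·cos(2ω_IR·τ − φ), and the atomic phase information is carried by φ. As a minimal illustration of this phase retrieval (a sketch on invented numbers, not the analysis code of the thesis), the snippet below synthesises such a trace and recovers φ by projecting the mean-subtracted signal onto the two quadratures at frequency 2ω_IR:

```python
import math

def extract_rabitt_phase(delays, signal, omega_ir):
    """Recover phi from S(tau) = A + B*cos(2*omega_ir*tau - phi) by
    projecting the mean-subtracted trace onto cos and sin at 2*omega_ir."""
    n = len(delays)
    mean = sum(signal) / n
    c = sum((s - mean) * math.cos(2 * omega_ir * t)
            for t, s in zip(delays, signal))
    s_ = sum((s - mean) * math.sin(2 * omega_ir * t)
             for t, s in zip(delays, signal))
    # cos(2wt - phi) = cos(phi)*cos(2wt) + sin(phi)*sin(2wt)
    return math.atan2(s_, c)

# toy sideband trace: IR frequency 1 (arbitrary units), phase 0.7 rad,
# sampled over an integer number of beating periods (period = pi here)
omega_ir = 1.0
taus = [i * math.pi / 100 for i in range(600)]
trace = [3.0 + 0.8 * math.cos(2 * omega_ir * t - 0.7) for t in taus]
phi = extract_rabitt_phase(taus, trace, omega_ir)
```

Sampling over an integer number of beating periods makes the quadrature projection exact up to floating-point error; with real data, a least-squares fit over the available delay range plays the same role.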

    Quantum Nonlocality

    This book presents the current views of leading physicists on a bizarre property of quantum theory: nonlocality. Einstein viewed it as “spooky action at a distance” which, together with randomness, left him unable to accept quantum theory. The contributions in the book describe in detail the bizarre aspects of nonlocality, such as Einstein–Podolsky–Rosen steering and quantum teleportation, a phenomenon which cannot be explained within the framework of classical physics owing to its foundations in quantum entanglement. The contributions describe the role of nonlocality in the rapidly developing field of quantum information. Nonlocal quantum effects in various systems, from solid-state quantum devices to organic molecules in proteins, are discussed. The most surprising papers in this book challenge the concept of the nonlocality of Nature and look for possible modifications, extensions, and new formulations, from retrocausality to novel types of multiple-world theories. These attempts have not yet been fully successful, but they provide hope for modifying quantum theory according to Einstein’s vision.

    Time-bin encoding for optical quantum computing

    Scalability has been a longstanding issue in implementing large-scale photonic experiments for optical quantum computing. Traditional encodings based on the polarisation or spatial degrees of freedom become extremely resource-demanding when the number of modes becomes large, as the need for many nonclassical sources of light and the number of beam splitters required become unfeasible. Alternatively, time-bin encoding paves the way to overcome some of these limitations, as it only requires a single quantum light source and can be scaled to many temporal modes through judicious choice of pulse sequence and delays. Such an apparatus constitutes an important step toward large-scale experiments with low resource consumption. This work focuses on the time-bin encoding implementation. First, we assess its feasibility by thoroughly investigating its performance through numerical simulations under realistic conditions. We identify the critical components of the architecture and find that it can achieve performances comparable to state-of-the-art devices. Moreover, we consider two implementation approaches, in fibre and free space, and enumerate their strengths and weaknesses. Subsequently, we delve into the lab to explore these schemes and the key components involved therein. For the fibre case, we report the first implementation of time-bin encoded Gaussian boson sampling and use the samples obtained from the device to search for dense subgraphs of sizes three and four in a 10-node graph. Finally, we complement the study of the time-bin encoding with two side projects that contribute to the broad spectrum of enabling techniques for quantum information science. First, we demonstrate the ability to perform photon-number resolving measurements with a commercial superconducting nanowire single-photon detector system and apply it to improve the statistics of a heralded single-photon source. 
    Second, we demonstrate that by employing a phase-tunable coherent state, we can fully characterise a multimode Gaussian state through the low-order photon statistics alone.
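For subgraph sizes as small as the three and four mentioned above, the densest-subgraph search that Gaussian boson sampling samples are used to seed can also be solved classically by exhaustive enumeration, which is a useful baseline. A minimal sketch on an invented 10-node graph (not the graph used in this work):

```python
from itertools import combinations

def densest_subgraph(edges, nodes, k):
    """Exhaustively find the k-node subset with the most internal edges.
    Feasible for small instances: C(10, 4) = 210 candidate subsets."""
    edge_set = {frozenset(e) for e in edges}
    best, best_count = None, -1
    for subset in combinations(nodes, k):
        count = sum(1 for pair in combinations(subset, 2)
                    if frozenset(pair) in edge_set)
        if count > best_count:
            best, best_count = subset, count
    return best, best_count

# invented 10-node graph with a dense cluster (a K4) on nodes 0-3
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (3, 4), (4, 5), (5, 6), (7, 8), (8, 9)]
best3, count3 = densest_subgraph(edges, range(10), 3)
best4, count4 = densest_subgraph(edges, range(10), 4)
```

Brute force scales combinatorially, which is exactly why sampling-based seeding becomes attractive for larger subgraphs and graphs.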

    Computational imaging and automated identification for aqueous environments

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011.

    Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a combination ideally suited to optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms.

    Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008
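As a point of reference for the focus metrics discussed above: a common baseline (not the thesis's Zernike-moment detector) scores sharpness by the variance of a discrete Laplacian, which is maximised by reconstructions near the true focal plane. A minimal sketch on toy data:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour discrete Laplacian over the interior:
    a standard sharpness score; in-focus images score higher."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# a sharp edge scores higher than a smooth ramp of the same size
sharp = [[0] * 4 + [10] * 4 for _ in range(8)]
smooth = [[x * 10 / 7 for x in range(8)] for _ in range(8)]
score_sharp = laplacian_variance(sharp)
score_smooth = laplacian_variance(smooth)
```

Scanning such a score across candidate reconstruction depths gives a simple autofocus curve; the thesis's contribution is a more discriminative detector built on local Zernike moments.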

    A Statistical Perspective of the Empirical Mode Decomposition

    This research focuses on non-stationary basis decomposition methods in time-frequency analysis. Classical methodologies in this field, such as Fourier analysis and wavelet transforms, rely on strong assumptions about the underlying moment-generating process, which may not be valid in real data scenarios or modern applications of machine learning. The literature on non-stationary methods is still in its infancy, and the research contained in this thesis aims to address challenges arising in this area. Among several alternatives, this work is based on the method known as the Empirical Mode Decomposition (EMD). The EMD is a non-parametric time-series decomposition technique that produces a set of time-series functions denoted as Intrinsic Mode Functions (IMFs), which carry specific statistical properties. The main focus is providing a general and flexible family of basis extraction methods with minimal requirements compared to those of the Fourier or wavelet techniques. This is highly important for two main reasons: first, more universal applications can be taken into account; secondly, the EMD requires very little a priori knowledge of the process to be applied, and as such it can have greater generalisation properties in statistical applications across a wide array of applications and data types. The contributions of this work deal with several aspects of the decomposition. The first set regards the construction of an IMF from several perspectives: (1) achieving a semi-parametric representation of each basis; (2) extracting such semi-parametric functional forms in a computationally efficient and statistically robust framework. The EMD belongs to the class of path-based decompositions and is therefore often not treated as a stochastic representation. (3) A major contribution involves the embedding of the deterministic pathwise decomposition framework into a formal stochastic process setting.
    One of the assumptions inherent in the EMD construction is that the decomposition is applied to a continuous function, which may not be the case in many applications. (4) Various multi-kernel Gaussian Process formulations of the EMD are proposed through the introduced stochastic embedding. In particular, two different models are proposed: one modelling the temporal mode of oscillations of the EMD, and the other capturing the location of instantaneous frequencies in specific frequency regions or bandwidths. (5) The construction of the second stochastic embedding is achieved with an optimisation method called the cross-entropy method, for which two formulations are provided and explored. Applications to speech time series, which are non-stationary, are explored to study these methodological extensions.
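The sifting procedure at the heart of the EMD can be sketched in a few lines. The version below is a deliberate simplification: it uses piecewise-linear envelopes instead of the cubic splines of the standard algorithm, performs a single sifting iteration rather than iterating to convergence, and assumes the signal has interior extrema:

```python
import math

def local_extrema(x):
    """Indices of strict local maxima and minima of a sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def interp(indices, values, n):
    """Piecewise-linear envelope through (indices, values), length n,
    extended flat to both boundaries."""
    env = [0.0] * n
    pts = [(0, values[0])] + list(zip(indices, values)) + [(n - 1, values[-1])]
    for (i0, v0), (i1, v1) in zip(pts, pts[1:]):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0) if i1 > i0 else 0.0
            env[i] = v0 + t * (v1 - v0)
    return env

def sift_once(x):
    """One sifting iteration: subtract the mean of the upper and lower
    envelopes, pushing the signal toward an IMF."""
    maxima, minima = local_extrema(x)
    upper = interp(maxima, [x[i] for i in maxima], len(x))
    lower = interp(minima, [x[i] for i in minima], len(x))
    return [xi - (u + l) / 2 for xi, u, l in zip(x, upper, lower)]

# demo: a fast oscillation riding on a slow linear trend
x = [math.sin(0.5 * i) + 0.05 * i for i in range(100)]
y = sift_once(x)  # the trend is largely removed after one sift
```

After sifting, the residual oscillates around zero, which is precisely the symmetric-envelope property an IMF must satisfy; the thesis's contribution is to recast this deterministic, pathwise construction in a stochastic setting.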

    Radiative neutron capture cross section on 238U at the n_TOF CERN facility: a high precision measurement

    The aim of this work is to provide a precise and accurate measurement of the ²³⁸U(n,γ) reaction cross-section. This reaction is of fundamental importance for the design calculations of nuclear reactors, governing the behaviour of the reactor core. In particular, fast neutron reactors, which are experiencing growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, existing measurements show inconsistencies of up to 15%, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear data uncertainty performed for innovative reactor systems shows that the uncertainty in the radiative capture cross-section of ²³⁸U should be further reduced to 1-3% in the energy region from 20 eV to 25 keV. To this purpose, which the Nuclear Energy Agency has identified as a priority nuclear data need, complementary experiments, one at GELINA and two at the n_TOF facility, were scheduled within the ANDES project of the European Commission's 7th Framework Programme. The results of one of the ²³⁸U(n,γ) measurements performed at the n_TOF CERN facility are presented in this work, carried out with a detection system consisting of two liquid scintillators. The very accurate cross-section from this work is compared with the results obtained from the other measurement performed at the n_TOF facility, which exploits a different and complementary detection technique. The excellent agreement between the two data sets indicates that they can contribute to the reduction of the cross-section uncertainty down to the required 1-3%.
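The way two consistent, independent measurements reduce the combined uncertainty can be illustrated with a standard inverse-variance weighted mean. The numbers below are invented for illustration, not values from this work:

```python
def combine(measurements):
    """Inverse-variance weighted mean of independent (value, sigma) pairs;
    the combined uncertainty is smaller than any individual one."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return mean, total ** -0.5

# invented numbers: two consistent measurements (in relative units)
# with 3% and 4% uncertainties combine to about 2.4%
value, sigma = combine([(1.00, 0.03), (1.02, 0.04)])
```

This simple combination assumes uncorrelated uncertainties; in practice shared systematic effects between data sets must be accounted for in the covariance, which is why complementary detection techniques are so valuable.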

    Digital Signal Processing (Second Edition)

    This book provides an account of the mathematical background, computational methods and software engineering associated with digital signal processing. The aim has been to provide the reader with the mathematical methods required for signal analysis which are then used to develop models and algorithms for processing digital signals and finally to encourage the reader to design software solutions for Digital Signal Processing (DSP). In this way, the reader is invited to develop a small DSP library that can then be expanded further with a focus on his/her research interests and applications. There are of course many excellent books and software systems available on this subject area. However, in many of these publications, the relationship between the mathematical methods associated with signal analysis and the software available for processing data is not always clear. Either the publications concentrate on mathematical aspects that are not focused on practical programming solutions or elaborate on the software development of solutions in terms of working ‘black-boxes’ without covering the mathematical background and analysis associated with the design of these software solutions. Thus, this book has been written with the aim of giving the reader a technical overview of the mathematics and software associated with the ‘art’ of developing numerical algorithms and designing software solutions for DSP, all of which is built on firm mathematical foundations. For this reason, the work is, by necessity, rather lengthy and covers a wide range of subjects compounded in four principal parts. Part I provides the mathematical background for the analysis of signals, Part II considers the computational techniques (principally those associated with linear algebra and the linear eigenvalue problem) required for array processing and associated analysis (error analysis for example). 
    Part III introduces the reader to the essential elements of software engineering using the C programming language, tailored to those features that are used for developing C functions or modules for building a DSP library. The material associated with Parts I, II and III is then used to build up a DSP system by defining a number of ‘problems’ and then addressing the solutions in terms of presenting an appropriate mathematical model, undertaking the necessary analysis, developing an appropriate algorithm and then coding the solution in C. This material forms the basis for Part IV of this work. In most chapters, a series of tutorial problems is given for the reader to attempt, with answers provided in Appendix A. These problems include theoretical, computational and programming exercises. Part II of this work is relatively long and arguably contains too much material on the computational methods for linear algebra. However, this material and the complementary material on vector and matrix norms form the computational basis for many methods of digital signal processing. Moreover, this important and widely researched subject area forms the foundations not only of digital signal processing and control engineering, for example, but also of numerical analysis in general. The material presented in this book is based on the lecture notes and supplementary material developed by the author for an advanced Masters course, ‘Digital Signal Processing’, which was first established at Cranfield University, Bedford in 1990 and modified when the author moved to De Montfort University, Leicester in 1994. The programmes are still operating at these universities and the material has been used by some 700+ graduates since its establishment and development in the early 1990s.
    The material was enhanced and developed further when the author moved to the Department of Electronic and Electrical Engineering at Loughborough University in 2003 and now forms part of the Department’s post-graduate programmes in Communication Systems Engineering. The original Masters programme included a taught component covering a period of six months, based on two semesters, each composed of four modules. The material in this work covers the first semester, and its four parts reflect the four modules delivered. The material delivered in the second semester is published as a companion volume to this work, entitled Digital Image Processing, Horwood Publishing, 2005, which covers the mathematical modelling of imaging systems and the techniques that have been developed to process and analyse the data such systems provide. Since the publication of the first edition of this work in 2003, a number of minor changes and some additions have been made. The material on programming and software engineering in Chapters 11 and 12 has been extended, and further solved and supplementary questions have been included throughout the text. Nevertheless, it is worth pointing out that, while every effort has been made by the author and publisher to provide a work that is error free, it is inevitable that typing errors and various ‘bugs’ will occur. If so, and in particular if the reader starts to suffer from a lack of comprehension over certain aspects of the material (due to errors or otherwise), then he/she should not assume that there is something wrong with themselves, but with the author.
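The book develops its DSP library in C; as a language-neutral illustration of the kind of core routine such a library is built from, here is a direct implementation of discrete linear convolution, the workhorse of digital filtering (a generic sketch, not code from the book):

```python
def convolve(x, h):
    """Full discrete linear convolution: y[n] = sum_k x[k] * h[n - k],
    with len(y) = len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        # clamp k so both x[k] and h[n - k] stay in range
        for k in range(max(0, n - len(h) + 1), min(n + 1, len(x))):
            y[n] += x[k] * h[n - k]
    return y

# a length-3 moving-average filter applied to a unit step:
# the output ramps up, holds at 1, then ramps down
y = convolve([1.0] * 5, [1 / 3] * 3)
```

A production library would add an FFT-based fast path for long sequences, but the direct form above is the reference against which such optimisations are tested.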