    Mesoscopic Physics of Quantum Systems and Neural Networks

    We study three different kinds of mesoscopic systems, which lie in the intermediate region between macroscopic and microscopic scales and consist of many interacting constituents. First, we consider particle entanglement in one-dimensional chains of interacting fermions. By employing a field-theoretical bosonization calculation, we obtain the one-particle entanglement entropy in the ground state and its time evolution after an interaction quantum quench, which causes relaxation towards non-equilibrium steady states. By pushing the boundaries of numerical exact diagonalization and density matrix renormalization group computations, we are able to accurately scale to the thermodynamic limit, where we make contact with the analytic field theory model. This allows us to fix an interaction cutoff required in the continuum bosonization calculation to account for the short-range interaction of the lattice model, such that the bosonization result provides accurate predictions for the one-body reduced density matrix in the Luttinger liquid phase.
    Establishing a better understanding of how to control entanglement in mesoscopic systems is also crucial for building qubits for a quantum computer. We therefore study a popular scalable qubit architecture based on Majorana zero modes (MZMs) in topological superconductors. The two major challenges in realizing Majorana qubits currently lie in trivial pseudo-Majorana states, which mimic signatures of the topological bound states, and in strong disorder in the proposed topological hybrid systems, which destroys the topological phase. We study coherent transport through interferometers with a Majorana wire embedded in one arm. By combining analytical and numerical considerations, we explain the occurrence of an amplitude maximum as a function of the Zeeman field at the onset of the topological phase, a signature unique to MZMs, which has recently been measured experimentally [Whiticar et al., Nature Communications, 11(1):3212, 2020]. By placing an array of gates in proximity to the nanowire, we make a fruitful connection to the field of machine learning, using the CMA-ES algorithm to tune the gate voltages in order to maximize the amplitude of coherent transmission. We find that the algorithm is capable of learning disorder profiles and even of restoring Majorana modes that were fully destroyed by strong disorder, by optimizing a feasible number of gates.
    Deep neural networks are another popular machine learning approach that not only has many direct applications to physical systems but also behaves similarly to physical mesoscopic systems. To understand the effects of the complex training dynamics, we employ Random Matrix Theory (RMT) as a zero-information hypothesis: before training, the weights are randomly initialized and therefore perfectly described by RMT. After training, we attribute deviations from these predictions to information learned and stored in the weight matrices. Conducting a careful numerical analysis, we verify that the spectra of the weight matrices consist of a random bulk and a few large singular values whose corresponding vectors carry almost all of the learned information. By further adding label noise to the training data, we find that additional singular values in intermediate parts of the spectrum contribute by fitting the randomly labeled images. Based on these observations, we propose a noise filtering algorithm that both removes the singular values storing the noise and reverts the level repulsion of the large singular values due to the random bulk.
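    For illustration, a minimal sketch of the low-rank filtering idea described above, assuming a trained weight matrix W and a hypothetical cutoff k separating the few informative singular values from the random bulk; the actual proposed algorithm additionally corrects the level repulsion of the retained singular values, which is not reproduced here:

```python
import numpy as np

def filter_weight_matrix(W: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest singular values of a weight matrix.

    Generic low-rank truncation illustrating the observation that the learned
    information resides in a few large singular values, while the bulk of the
    spectrum resembles that of a random matrix.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_filtered = np.zeros_like(s)
    s_filtered[:k] = s[:k]          # retain the informative part of the spectrum
    return (U * s_filtered) @ Vt    # reassemble the filtered weight matrix

# Example: filter a randomly initialized 256x128 layer, keeping 10 singular values.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128)) / np.sqrt(128)
W_clean = filter_weight_matrix(W, k=10)
```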

    Configurable EBEN: Extreme Bandwidth Extension Network to enhance body-conducted speech capture

    This paper presents a configurable version of the Extreme Bandwidth Extension Network (EBEN), a Generative Adversarial Network (GAN) designed to improve audio captured with body-conduction microphones. We show that, although these microphones significantly reduce environmental noise, this insensitivity to ambient noise comes at the expense of the bandwidth of the speech signal acquired from the wearer of the device. The captured signals therefore require signal enhancement techniques to recover the full-bandwidth speech. EBEN leverages a configurable multiband decomposition of the raw captured signal. This decomposition reduces the time-domain dimension of the data and gives better control over the full-band signal. The multiband representation of the captured signal is processed by a U-Net-like model, which combines feature and adversarial losses to generate an enhanced speech signal. We also benefit from this original representation in the proposed configurable discriminator architecture. The configurable EBEN approach can achieve state-of-the-art enhancement results on synthetic data with a lightweight generator that allows real-time processing.
    Comment: Accepted in IEEE/ACM Transactions on Audio, Speech and Language Processing on 14/08/202
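    As a rough illustration of the multiband idea (not EBEN's actual filter bank, whose design is configurable and not specified here), the sketch below splits a speech signal into equal-width bands and decimates each band, reducing the time dimension by the number of bands:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def multiband_analysis(x: np.ndarray, n_bands: int = 4, n_taps: int = 129) -> np.ndarray:
    """Split a mono signal into n_bands equal frequency bands, decimating each by n_bands.

    Generic uniform filter-bank analysis used only to illustrate the concept;
    the filter design and band allocation are assumptions for this sketch.
    """
    edges = np.linspace(0.0, 1.0, n_bands + 1)  # normalized band edges (Nyquist = 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0.0:
            h = firwin(n_taps, hi, pass_zero="lowpass")
        elif hi == 1.0:
            h = firwin(n_taps, lo, pass_zero="highpass")
        else:
            h = firwin(n_taps, [lo, hi], pass_zero="bandpass")
        bands.append(lfilter(h, [1.0], x)[::n_bands])  # filter, then decimate
    return np.stack(bands)  # shape: (n_bands, len(x) // n_bands)

# Example: decompose one second of a synthetic 16 kHz signal into 4 subbands.
fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
subbands = multiband_analysis(x, n_bands=4)
```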

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation. Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to the underlying mesh deformations in a physically plausible manner. Various methods have been explored to synthesize wrinkles directly on the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control, but may fall short for complex deformations or where greater realism is required. Our work highlights the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
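    A minimal sketch of the general idea (not the actual pipeline proposed in the thesis): a dynamic wrinkle layer whose strength follows a per-texel compression estimate of the deforming mesh, combined with a static procedural noise layer for microstructure. The compression map, gains and placeholder textures below are assumptions for illustration.

```python
import numpy as np

def dynamic_heightmap(wrinkle_map: np.ndarray,
                      compression: np.ndarray,
                      micro_noise: np.ndarray,
                      wrinkle_gain: float = 1.0,
                      micro_gain: float = 0.15) -> np.ndarray:
    """Combine a dynamic mesostructure layer with a static microstructure layer.

    wrinkle_map : wrinkle pattern, authored or generated procedurally
    compression : per-texel compression of the deforming mesh in [0, 1]
                  (0 = relaxed/stretched, 1 = fully compressed), assumed to be
                  evaluated from the animated mesh each frame
    micro_noise : static procedural noise adding fine surface detail
    """
    meso = wrinkle_gain * compression * wrinkle_map   # wrinkles appear only where the surface compresses
    micro = micro_gain * micro_noise                  # static fine-grained detail
    return np.clip(meso + micro, 0.0, 1.0)

# Example with random placeholder textures (stand-ins for real maps).
rng = np.random.default_rng(1)
h = dynamic_heightmap(rng.random((256, 256)), rng.random((256, 256)), rng.random((256, 256)))
```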

    Learning-based Wavelet-like Transforms For Fully Scalable and Accessible Image Compression

    The goal of this thesis is to improve the existing wavelet transform with the aid of machine learning techniques, so as to enhance the coding efficiency of wavelet-based image compression frameworks, such as JPEG 2000. In this thesis, we first propose to augment the conventional base wavelet transform with two additional learned lifting steps -- a high-to-low step followed by a low-to-high step. The high-to-low step suppresses aliasing in the low-pass band by using the detail bands at the same resolution, while the low-to-high step aims to further remove redundancy from the detail bands by using the corresponding low-pass band. These two additional steps reduce redundancy (notably aliasing information) amongst the wavelet subbands and also improve the visual quality of reconstructed images at reduced resolutions. To train these two networks in an end-to-end fashion, we develop a backward annealing approach to overcome the non-differentiability of the quantization and cost functions during back-propagation. Importantly, the two additional networks share a common architecture, named a proposal-opacity topology, which is inspired and guided by a specific theoretical argument related to geometric flow. This particular network topology is compact and has limited non-linearities, allowing a fully scalable system: one pair of trained network parameters is applied for all levels of decomposition and for all bit-rates of interest. By employing the additional lifting networks within the JPEG 2000 image coding standard, we can achieve up to 17.4% average BD bit-rate saving over a wide range of bit-rates, while retaining the quality and resolution scalability features of JPEG 2000.
    Building upon the success of the high-to-low and low-to-high steps, we then study more broadly the extension of neural networks to all lifting steps that correspond to the base wavelet transform. The purpose of this comprehensive study is to understand the most effective way to develop learned wavelet-like transforms for highly scalable and accessible image compression. Specifically, we examine the impact of the number of learned lifting steps, the number of layers and the number of channels in each learned lifting network, and the kernel support in each layer. To facilitate the study, we develop a generic training methodology that is simultaneously appropriate to all lifting structures considered. Experimental results ultimately suggest that, to improve the existing wavelet transform, it is more profitable to augment a larger wavelet transform with more diverse high-to-low and low-to-high steps than to develop deep, fully learned lifting structures.
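    A schematic sketch of where the two extra lifting steps sit relative to a one-level decomposition; the placeholder filter below merely stands in for the learned lifting networks described above and is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

def placeholder_net(band: np.ndarray) -> np.ndarray:
    """Stand-in for a learned lifting network: a small fixed smoothing filter."""
    kernel = np.ones((3, 3)) / 9.0
    return convolve(band, kernel, mode="nearest")

def augmented_lifting(low: np.ndarray, detail: np.ndarray):
    """Append a high-to-low and a low-to-high step to an existing decomposition.

    low    : low-pass band from the base wavelet transform
    detail : a detail band at the same resolution
    The high-to-low step uses the detail band to suppress aliasing in the
    low-pass band; the low-to-high step then uses the updated low-pass band
    to remove remaining redundancy from the detail band.
    """
    low = low - placeholder_net(detail)      # high-to-low update
    detail = detail - placeholder_net(low)   # low-to-high prediction
    return low, detail

# Example on random stand-in subbands.
rng = np.random.default_rng(2)
low, detail = augmented_lifting(rng.random((64, 64)), rng.random((64, 64)))
```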

    Large-Scale surveys for continuous gravitational waves: from data preparation to multi-stage hierarchical follow-ups

    The gravitational wave event GW150914 was the first direct detection of gravitational waves, roughly 100 years after their prediction by Albert Einstein. The detection was a breakthrough, opening another channel to observe the Universe. Since then, over 90 detections of merging compact objects have been made, most of them coalescences of binary black holes of different masses; there have also been two black hole-neutron star mergers and two binary neutron-star mergers. Another breakthrough was the first binary neutron-star merger, GW170817, associated with a slew of electromagnetic observations, including a gamma-ray burst 1.7 s after the merger. Compact binary coalescences are cataclysmic events in which multiple solar masses are emitted in gravitational waves within seconds. Still, their gravitational wave detection requires sophisticated measuring devices: kilometer-scale laser interferometers.
    Another, not yet detected, form of gravitational radiation is continuous gravitational waves from, for example but not limited to, fast-spinning neutron stars that are nonaxisymmetric relative to their rotation axis. The gravitational wave amplitude on Earth is orders of magnitude weaker than that of compact binary coalescence events, but, in the case of a nonaxisymmetric neutron star, the signal is emitted for as long as the star keeps spinning and sustaining the deformation, which may be months to years. The gravitational wave is mostly emitted at twice the rotational frequency, with a possible frequency evolution (spin-down) due to the energy carried away by gravitational waves as well as other braking mechanisms. This nearly monochromatic continuous wave is received by observers on Earth Doppler-modulated by Earth's orbital motion and spin. Although the waveform is seemingly simple, the detection problem for signals from unknown sources is very challenging. The all-sky search for unknown neutron stars in our galaxy detailed in this work used the volunteer distributed computing project Einstein@Home and the ATLAS supercomputer for several months, taking tens of thousands of CPU-time years in total to complete. In this work I describe the full-scale data analysis procedure, including data preparation, search set-up optimization and post-processing of search results, whose design and implementation are the core of my doctoral research work. I also present a number of observational results that demonstrate the real-world application of the methodologies that I designed.
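    As a toy illustration of the signal model sketched above (not the analysis pipeline used in the search), the snippet below builds the phase of a nearly monochromatic wave with a small spin-down and a simplified sinusoidal line-of-sight Doppler modulation from Earth's orbit; the chosen frequency, spin-down and modulation geometry are assumptions for illustration:

```python
import numpy as np

# Toy continuous-wave signal model: slow spin-down plus orbital Doppler modulation.
# Real searches use full solar-system barycentering and the detector response.
C = 299_792_458.0                 # speed of light, m/s
AU = 1.495978707e11               # astronomical unit, m
YEAR = 365.25 * 86400.0           # one year, s

f0 = 100.0                        # emission frequency (about twice the rotation frequency), Hz
fdot = -1e-10                     # spin-down rate, Hz/s
t = np.arange(0.0, 10 * 86400.0, 60.0)                     # ten days of data, one sample per minute

f_emit = f0 + fdot * t                                     # slowly drifting emission frequency
v_orb = 2.0 * np.pi * AU / YEAR                            # Earth's mean orbital speed, m/s
doppler = 1.0 + (v_orb / C) * np.cos(2.0 * np.pi * t / YEAR)   # toy line-of-sight modulation
f_obs = f_emit * doppler                                   # observed (Doppler-shifted) frequency

phase = 2.0 * np.pi * np.cumsum(f_obs) * (t[1] - t[0])     # numerically integrated phase
strain = 1e-25 * np.cos(phase)                             # nearly monochromatic toy strain
```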

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit-rate and service time, as well as required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, including the necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    Advanced VLBI Imaging

    Very Long Baseline Interferometry (VLBI) is an observational technique developed in astronomy for combining multiple radio telescopes into a single virtual instrument with an effective aperture reaching up to many thousands of kilometers, enabling measurements at the highest angular resolutions. Celebrated examples of applying VLBI to astrophysical studies include detailed, high-resolution images of the innermost parts of relativistic outflows (jets) in active galactic nuclei (AGN) and the recent pioneering observations of the shadows of supermassive black holes (SMBH) in the center of our Galaxy and in the galaxy M87. Despite these and many other proven successes of VLBI, analysis and imaging of VLBI data still remain difficult, owing in part to the fact that VLBI imaging inherently constitutes an ill-posed inverse problem. Historically, this problem has been addressed in radio interferometry by the CLEAN algorithm, a matching-pursuit inverse modeling method developed in the early 1970s and since then established as the de facto standard approach for imaging VLBI data. In recent years, the constantly increasing demand for improving the quality and fidelity of interferometric image reconstruction has resulted in several attempts to apply new approaches, such as forward modeling and Bayesian estimation, to VLBI imaging. While the current state-of-the-art forward modeling and Bayesian techniques may outperform CLEAN in terms of accuracy, resolution, robustness, and adaptability, they also tend to require more complex structure and longer computation times, and rely on extensive fine-tuning of a larger number of non-trivial hyperparameters. This leaves ample room for further searches for potentially more effective imaging approaches and provides the main motivation for this dissertation and its particular focus on the need to unify algorithmic frameworks and to study VLBI imaging from the perspective of inverse problems in general.
    In pursuit of this goal, and based on an extensive qualitative comparison of the existing methods, this dissertation comprises the development, testing, and first implementations of two novel concepts for improved interferometric image reconstruction. The concepts combine the known benefits of current forward modeling techniques, develop more automatic and less supervised algorithms for image reconstruction, and realize them within two different frameworks. The first framework unites multiscale imaging algorithms in the spirit of compressive sensing with a dictionary adapted to the uv-coverage and its defects (DoG-HiT, DoB-CLEAN). We extend this approach to dynamical imaging and polarimetric imaging. The core components of this framework are realized in the multidisciplinary and multipurpose software package MrBeam, developed as part of this dissertation. The second framework employs a multiobjective genetic evolutionary algorithm (MOEA/D) to achieve fully unsupervised image reconstruction and hyperparameter optimization. These new methods are shown to outperform the existing methods in various metrics such as angular resolution, structural sensitivity, and degree of supervision. We demonstrate the great potential of these new techniques with selected applications to frontline VLBI observations of AGN jets and SMBH. In addition to improving the quality and robustness of image reconstruction, DoG-HiT, DoB-CLEAN and MOEA/D also provide novel capabilities such as dynamic reconstruction of polarimetric images on minute time-scales, or near-real-time and unsupervised data analysis (useful in particular for application to large imaging surveys). The techniques and software developed in this dissertation are of interest for a wider range of inverse problems as well. This includes fields as diverse as Ly-alpha tomography (where we improve estimates of the thermal state of the intergalactic medium), the cosmographic search for dark matter (where we improve forecasted bounds on ultralight dilatons), medical imaging, and solar spectroscopy.
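    For context, a minimal sketch of the matching-pursuit idea behind CLEAN (a heavily simplified, image-domain Hogbom-style variant; real VLBI imaging works with visibilities, a proper dirty beam, and restoring steps):

```python
import numpy as np

def hogbom_clean(dirty_image, dirty_beam, gain=0.1, n_iter=500, threshold=0.0):
    """Toy image-domain CLEAN loop (matching pursuit).

    Assumes dirty_beam has the same shape as dirty_image and is normalized
    to a peak of 1 at its central pixel. Repeatedly finds the brightest
    residual pixel, subtracts a scaled, shifted copy of the dirty beam,
    and records the subtracted flux as a point-source model component.
    """
    residual = dirty_image.astype(float).copy()
    model = np.zeros_like(residual)
    cy, cx = dirty_beam.shape[0] // 2, dirty_beam.shape[1] // 2
    for _ in range(n_iter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if abs(peak) < threshold:
            break
        component = gain * peak
        # Shift the beam onto the peak position (wrap-around is a toy simplification).
        shifted_beam = np.roll(np.roll(dirty_beam, iy - cy, axis=0), ix - cx, axis=1)
        residual -= component * shifted_beam
        model[iy, ix] += component
    return model, residual
```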

    Technologies of information transmission and processing

    This collection contains papers devoted to scientific and theoretical developments in the fields of telecommunication networks, information security, and technologies for the transmission and processing of information. It is intended for researchers working in infocommunications, as well as for lecturers, postgraduate students, master's students, and students of technical universities.

    End-to-end numerical modeling of the Roman Space Telescope coronagraph

    The Roman Space Telescope will have the first advanced coronagraph in space, with deformable mirrors for wavefront control, low-order wavefront sensing and maintenance, and a photon-counting detector. It is expected to be able to detect and characterize mature, giant exoplanets in reflected visible light. Over the past decade, the performance of the coronagraph in its flight environment has been simulated with increasingly detailed diffraction and structural/thermal finite-element modeling. With the instrument now being integrated in preparation for launch within the next few years, the present state of the end-to-end modeling is described, including the measured flight components such as the deformable mirrors. The coronagraphic modes are thoroughly described, including the characteristics most readily derived from modeling. The methods for diffraction propagation, wavefront control, and structural and thermal finite-element modeling are detailed. The techniques and procedures developed for the instrument will serve as a foundation for future coronagraphic missions such as the Habitable Worlds Observatory.
    Comment: 113 pages, 85 figures, to be published in SPIE Journal of Astronomical Telescopes, Instruments, and Systems
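    As a toy illustration of the kind of diffraction modeling involved (not the actual Roman coronagraph model, which uses detailed optical prescriptions, deformable-mirror wavefront control and structural/thermal inputs), the snippet below performs a Fraunhofer propagation of a circular pupil to a focal plane where a simple opaque spot blocks the stellar core, followed by a Lyot-style pupil stop; all sizes are arbitrary assumptions:

```python
import numpy as np

# Toy Lyot-style coronagraph propagation: pupil -> focal-plane mask -> Lyot stop -> detector.
N = 256
x = np.linspace(-1.0, 1.0, N)
xx, yy = np.meshgrid(x, x)
pupil = (xx**2 + yy**2 <= 1.0).astype(float)                # unobscured circular aperture

focal_field = np.fft.fftshift(np.fft.fft2(pupil))           # pupil -> focal plane (Fraunhofer)
u = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))       # focal-plane coordinates (arbitrary units)
uu, vv = np.meshgrid(u, u)
occulter = (uu**2 + vv**2 >= 1.5**2).astype(float)          # opaque spot blocking the stellar core

lyot_plane = np.fft.ifft2(np.fft.ifftshift(focal_field * occulter))   # back to a pupil plane
lyot_stop = (xx**2 + yy**2 <= 0.9**2).astype(float)         # slightly undersized Lyot stop
detector = np.abs(np.fft.fftshift(np.fft.fft2(lyot_plane * lyot_stop)))**2   # final image
```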