
    Multiresolution Approximation of a Bayesian Inverse Problem using Second-Generation Wavelets

    Bayesian approaches are one of the primary methodologies for tackling an inverse problem in high dimensions. Such an inverse problem arises in hydrology when inferring the permeability field of a porous medium from flow data. It is common practice to decompose the unknown field into some basis and infer the decomposition parameters instead of directly inferring the unknown. Given the multiscale nature of permeability fields, wavelets are a natural choice for parameterizing them. This study uses a Bayesian approach to incorporate the statistical sparsity that characterizes discrete wavelet coefficients. First, we impose a prior distribution incorporating the hierarchical structure of the wavelet coefficients and the smoothness of the reconstruction via scale-dependent hyperparameters. Then, a Sequential Monte Carlo (SMC) method adaptively explores the posterior density on different scales, followed by model selection based on Bayes factors. Finally, the permeability field is reconstructed from the coefficients using a multiresolution approach based on second-generation wavelets. Here, observations from the pressure sensor grid network are computed via the Multilevel Adaptive Wavelet Collocation Method (AWCM). Results highlight the importance of prior modeling for parameter estimation in the inverse problem.
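As a rough illustration of such a wavelet parameterization with scale-dependent sparsity, the sketch below decomposes a 2D field with PyWavelets and evaluates a scale-dependent Laplace log-prior over the detail coefficients; the wavelet family, hyperparameters, and function names are illustrative assumptions, not the prior used in the study.

```python
# Minimal sketch: parameterize a 2D field by wavelet coefficients and score them
# with a scale-dependent (sparsity-promoting) prior. Assumes PyWavelets.
import numpy as np
import pywt

field = np.random.rand(64, 64)             # stand-in for a log-permeability field
coeffs = pywt.wavedec2(field, "db2", level=3)

def log_prior(coeffs, tau0=1.0, rho=0.5):
    """Scale-dependent Laplace prior: finer scales get smaller scale parameters,
    encoding the expectation that fine-scale coefficients are mostly near zero."""
    logp = 0.0
    # coeffs[0] is the coarse approximation; coeffs[1:] are detail tuples,
    # ordered from coarsest to finest scale
    for level, details in enumerate(coeffs[1:], start=1):
        tau = tau0 * rho ** level          # shrink allowed magnitude with scale
        for d in details:
            logp += -np.sum(np.abs(d)) / tau - d.size * np.log(2 * tau)
    return logp

print(log_prior(coeffs))
recon = pywt.waverec2(coeffs, "db2")       # map coefficients back to the field
```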

    Mesoscopic Physics of Quantum Systems and Neural Networks

    We study three different kinds of mesoscopic systems, i.e., systems in the intermediate region between macroscopic and microscopic scales, consisting of many interacting constituents. First, we consider particle entanglement in one-dimensional chains of interacting fermions. By employing a field-theoretical bosonization calculation, we obtain the one-particle entanglement entropy in the ground state and its time evolution after an interaction quantum quench, which causes relaxation towards non-equilibrium steady states. By pushing the boundaries of numerical exact diagonalization and density matrix renormalization group computations, we are able to accurately scale to the thermodynamic limit, where we make contact with the analytic field theory model. This allows us to fix an interaction cutoff required in the continuum bosonization calculation to account for the short-range interaction of the lattice model, such that the bosonization result provides accurate predictions for the one-body reduced density matrix in the Luttinger liquid phase. Establishing a better understanding of how to control entanglement in mesoscopic systems is also crucial for building qubits for a quantum computer. We further study a popular scalable qubit architecture that is based on Majorana zero modes (MZMs) in topological superconductors. The two major challenges in realizing Majorana qubits currently lie in trivial pseudo-Majorana states that mimic signatures of the topological bound states and in strong disorder in the proposed topological hybrid systems that destroys the topological phase. We study coherent transport through interferometers with a Majorana wire embedded in one arm. By combining analytical and numerical considerations, we explain the occurrence of an amplitude maximum as a function of the Zeeman field at the onset of the topological phase – a signature unique to MZMs – which has recently been measured experimentally [Whiticar et al., Nature Communications, 11(1):3212, 2020]. By placing an array of gates in proximity to the nanowire, we make a fruitful connection to the field of machine learning by using the CMA-ES algorithm to tune the gate voltages in order to maximize the amplitude of coherent transmission. We find that the algorithm is capable of learning disorder profiles and even of restoring Majorana modes that were fully destroyed by strong disorder, by optimizing a feasible number of gates. Deep neural networks are another popular machine learning approach, which not only has many direct applications to physical systems but also behaves similarly to physical mesoscopic systems. In order to comprehend the effects of the complex dynamics of training, we employ Random Matrix Theory (RMT) as a zero-information hypothesis: before training, the weights are randomly initialized and therefore are perfectly described by RMT. After training, we attribute deviations from these predictions to learned information in the weight matrices. Conducting a careful numerical analysis, we verify that the spectra of weight matrices consist of a random bulk and a few important large singular values and corresponding vectors that carry almost all the learned information. By further adding label noise to the training data, we find that more singular values in intermediate parts of the spectrum contribute by fitting the randomly labeled images. Based on these observations, we propose a noise filtering algorithm that both removes the singular values storing the noise and reverts the level repulsion of the large singular values due to the random bulk.
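As a rough sketch of the bulk-removal part of such a filter (the correction for the level repulsion of the large singular values is omitted here), the code below zeroes the singular values of a weight matrix that fall inside an estimated Marchenko-Pastur bulk; the noise-scale estimate and all names are illustrative assumptions, not the thesis algorithm.

```python
# Illustrative RMT-motivated filtering: keep only singular values that stick out
# of the random (Marchenko-Pastur) bulk of a weight matrix.
import numpy as np

def filter_weights(W, sigma=None):
    n, m = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    if sigma is None:
        sigma = np.median(s) / np.sqrt(max(n, m))    # rough noise-scale estimate
    # Marchenko-Pastur upper edge for an n x m i.i.d. random matrix with std sigma
    bulk_edge = sigma * (np.sqrt(n) + np.sqrt(m))
    keep = s > bulk_edge                             # "informative" singular values
    s_filtered = np.where(keep, s, 0.0)
    return (U * s_filtered) @ Vt, int(keep.sum())

W = np.random.randn(512, 256) * 0.05                 # stand-in for a trained layer
W_clean, n_kept = filter_weights(W)
print(f"kept {n_kept} singular values above the estimated bulk edge")
```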

    The Effect of Auditory Stimuli on the Quantitative Electroencephalogram in Patients with Parkinson's Disease

    Parkinson's Disease (PD) is the second most common neurodegenerative disorder worldwide, with increasing incidence and prevalence. It mainly affects the motor system due to a loss of dopaminergic neurons in the substantia nigra and leads to cardinal symptoms including brady-/akinesia, tremor, muscle stiffness, and postural instability. After clinical diagnosis, treatment is primarily based on L-dopa, dopamine agonists, and MAO-B inhibitors. Even with therapy, PD continues to progress and remains incurable. In recent years, music therapy has been established as a complementary therapy due to a variety of positive effects, mainly on the motor system. However, it is still insufficiently explained what exactly renders music therapy so effective. Possible explanations range from an increased dopamine release to better functional connectivity between different brain areas. The aim of this methodologically innovative study was to find underlying mechanisms for the effectiveness of music therapy based on EEG analysis. The EEG was chosen due to its good temporal resolution, fast availability, and relatively low cost. The research questions were, first, whether it is generally possible to distinguish patients with PD from healthy controls (HC) based on their EEG; second, whether auditory stimuli show an effect on the EEG; third, on which features a differentiation of both groups in the EEG is based; and fourth, which characteristics render an auditory stimulus effective. The study was conducted in collaboration between the University of British Columbia (UBC) in Vancouver and the Philipps-Universität Marburg. In 2017 and 2018, 12 patients with PD and 4 age-matched HC were tested at the UBC campus. A total of 5 EEGs (conditions) were recorded from each subject, at rest and under auditory stimulation. The three stimuli differed in complexity (rain vs. spring walk) and modulation (rhythmic and non-rhythmic). For a more precise interpretation of the results, natural sounds were used as stimuli instead of music. Due to the amount of data, a custom-made pattern recognition algorithm (a Support Vector Machine) was used, distinguishing both groups through a hyperplane within a high-dimensional feature space. Redundant data was removed in advance by calculating the mutual information quotient, so that only relevant data entered the final analysis. It could be shown that, first, the differentiation of both groups on the basis of the EEG is generally possible, in this case even with a convincing classification accuracy of up to 90 %. Second, the auditory stimuli mainly had an effect on the EEG samples of the HC and made the classification more complex: the EEG samples of the HC approached those of the PD patients within the feature space, rendering a common hyperplane for all conditions ineffective. Based on shared features but with a separate hyperplane for each condition, a classification accuracy of 80-90 %, and thus very good discrimination of both groups, could be achieved again even under the influence of auditory stimuli. Third, the by far most important features for distinguishing both groups were related to the delta frequency band (0.5-4 Hz), including band power, indices of the delta band, and harmonic parameters. The increased importance of delta in PD matches the existing literature and is most likely due to cognitive decline. This study extends the existing literature on delta with the harmonic parameters, mainly the center frequency and the spectral value thereof. In addition, the delta frequency band is often linked to relaxation and sleep; thus, the convergence of the EEG samples is most likely explained by stimulus-induced relaxation. Another important feature seems to be the phase lag index (PLI). It is also mentioned in the literature as an indicator of mild cognitive impairment and decreases under the influence of the stimuli. A link between the PLI and functional connectivity, as mentioned in the literature, could not be shown in this study. Fourth, the convergence of the HC samples towards the PD samples was particularly evident in the rain conditions, with misclassifications of up to 80 %. This was the case in both the rhythmic and non-rhythmic variants. Given the importance of rhythm often shown in the literature on music therapy, it appears that the intended modulation was not perceived as rhythmic by the subjects. The convergence of samples was less evident in the spring walk condition, where higher frequency bands were relevant too. Auditory stimuli thus seem to need a basic complexity to show an effect on the EEG. Several approaches for further research arise. For example, if the delta band is expected to be important, greater epoch lengths than in this study (3 seconds) could be analyzed to avoid false interpretations due to epochs being too short to capture very slow oscillations. In addition, a general slowing of the EEG is probably not specific to PD. For a more specific analysis, the inclusion of participants with mild cognitive impairment (PD-MCI as well as MCI not caused by PD) would be useful. Testing more complex stimuli such as music, including motor functions in the analysis, or even measuring dopamine levels would also remain of interest. Looking at the study design, a more balanced patient population might be beneficial. In order to show an effect of music therapy in the EEG, a convergence of PD samples towards HC samples would have been desirable; due to the relaxation, the opposite was the case. The chosen methodology, however, seems very appropriate. The classification of both groups was possible at a convincingly high level, which recommends this approach for further research due to its applicability beyond neurology and even medicine.
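A minimal sketch of this kind of pipeline, assuming scikit-learn and SciPy, is given below: delta band power per channel as features, mutual-information-based feature selection, and an SVM classifier. The sampling rate, band edges, and synthetic stand-in data are illustrative assumptions, not the study's actual feature set.

```python
# Sketch: delta band power features + mutual-information selection + SVM.
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 250                                    # sampling rate in Hz (assumed)

def band_power(epoch, lo=0.5, hi=4.0):
    """Mean spectral power of one EEG channel within a band (default: delta)."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), fs * 2))
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

# X_raw: epochs x channels x samples of raw EEG; y: 0 = HC, 1 = PD (synthetic stand-ins)
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 19, fs * 3))       # 3-second epochs, 19 channels
y = rng.integers(0, 2, size=100)
X = np.array([[band_power(ch) for ch in ep] for ep in X_raw])

clf = make_pipeline(StandardScaler(),
                    SelectKBest(mutual_info_classif, k=10),
                    SVC(kernel="rbf"))
clf.fit(X, y)
```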

    Advanced VLBI Imaging

    Very Long Baseline Interferometry (VLBI) is an observational technique developed in astronomy for combining multiple radio telescopes into a single virtual instrument with an effective aperture reaching up to many thousands of kilometers, enabling measurements at the highest angular resolutions. Celebrated examples of applying VLBI to astrophysical studies include detailed, high-resolution images of the innermost parts of relativistic outflows (jets) in active galactic nuclei (AGN) and the recent pioneering observations of the shadows of supermassive black holes (SMBH) in the center of our Galaxy and in the galaxy M87. Despite these and many other proven successes of VLBI, analysis and imaging of VLBI data remain difficult, owing in part to the fact that VLBI imaging inherently constitutes an ill-posed inverse problem. Historically, this problem has been addressed in radio interferometry by the CLEAN algorithm, a matching-pursuit inverse modeling method developed in the early 1970s and since then established as the de facto standard approach for imaging VLBI data. In recent years, the constantly increasing demand for improving the quality and fidelity of interferometric image reconstruction has resulted in several attempts to employ new approaches, such as forward modeling and Bayesian estimation, for VLBI imaging. While the current state-of-the-art forward modeling and Bayesian techniques may outperform CLEAN in terms of accuracy, resolution, robustness, and adaptability, they also tend to require more complex structure and longer computation times, and rely on extensive fine-tuning of a larger number of non-trivial hyperparameters. This leaves ample room for further searches for potentially more effective imaging approaches and provides the main motivation for this dissertation and its particular focus on the need to unify algorithmic frameworks and to study VLBI imaging from the perspective of inverse problems in general. In pursuit of this goal, and based on an extensive qualitative comparison of the existing methods, this dissertation comprises the development, testing, and first implementations of two novel concepts for improved interferometric image reconstruction. The concepts combine the known benefits of current forward modeling techniques, develop more automatic and less supervised algorithms for image reconstruction, and realize them within two different frameworks. The first framework unites multiscale imaging algorithms in the spirit of compressive sensing with a dictionary adapted to the uv-coverage and its defects (DoG-HiT, DoB-CLEAN). We extend this approach to dynamical imaging and polarimetric imaging. The core components of this framework are realized in the multidisciplinary, multipurpose software package MrBeam, developed as part of this dissertation. The second framework employs a multiobjective genetic evolutionary algorithm (MOEA/D) to achieve fully unsupervised image reconstruction and hyperparameter optimization. These new methods are shown to outperform the existing methods in various metrics such as angular resolution, structural sensitivity, and degree of supervision. We demonstrate the great potential of these new techniques with selected applications to frontline VLBI observations of AGN jets and SMBH. In addition to improving the quality and robustness of image reconstruction, DoG-HiT, DoB-CLEAN, and MOEA/D also provide novel capabilities such as dynamic reconstruction of polarimetric images on minute time-scales, or near-real-time and unsupervised data analysis (useful in particular for large imaging surveys). The techniques and software developed in this dissertation are of interest for a wider range of inverse problems as well, including fields as diverse as Ly-alpha tomography (where we improve estimates of the thermal state of the intergalactic medium), the cosmographic search for dark matter (where we improve forecasted bounds on ultralight dilatons), medical imaging, and solar spectroscopy.
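For context, the sketch below shows a schematic Högbom-style CLEAN loop, the matching-pursuit deconvolution the abstract refers to as the historical baseline; the loop gain, stopping rule, toy data, and names are illustrative assumptions, and this is not the dissertation's DoG-HiT, DoB-CLEAN, or MOEA/D code.

```python
# Schematic Hogbom CLEAN: iteratively subtract shifted, scaled copies of the
# dirty beam at the current residual peak; the accumulated peaks form the model.
import numpy as np

def hogbom_clean(dirty, beam, gain=0.1, threshold=1e-3, max_iter=1000):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = np.array(beam.shape) // 2
    for _ in range(max_iter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if np.abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # subtract the beam centered on (y, x), clipped to the image edges
        y0, y1 = max(0, y - half[0]), min(residual.shape[0], y + half[0])
        x0, x1 = max(0, x - half[1]), min(residual.shape[1], x + half[1])
        by0, bx0 = half[0] - (y - y0), half[1] - (x - x0)
        residual[y0:y1, x0:x1] -= gain * peak * beam[by0:by0 + (y1 - y0),
                                                     bx0:bx0 + (x1 - x0)]
    return model, residual

beam = np.outer(np.hanning(33), np.hanning(33))    # stand-in for the dirty beam
dirty = 0.01 * np.random.randn(64, 64)
dirty[20, 30] += 1.0                               # one bright point source
model, residual = hogbom_clean(dirty, beam)
```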

    Technologies of information transmission and processing

    This collection contains articles devoted to scientific and theoretical developments in the fields of telecommunication networks, information security, and technologies for the transmission and processing of information. It is intended for researchers in the field of infocommunications, lecturers, postgraduate and master's students, and students of technical universities.

    A Perceptually Optimized and Self-Calibrated Tone Mapping Operator

    With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression are in high practical demand. In this paper, we develop a two-stage neural network-based TMO that is self-calibrated and perceptually optimized. In the first stage, motivated by the physiology of the early stages of the human visual system, we first decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks (DNNs), taking the normalized representation as input and estimating the Laplacian pyramid of the corresponding LDR image. We optimize the tone mapping network by minimizing the normalized Laplacian pyramid distance (NLPD), a perceptual metric that aligns with human judgments of tone-mapped image quality. In the second stage, the input HDR image is self-calibrated to compute the final LDR image. We feed the same HDR image, rescaled with different maximum luminances, to the learned tone mapping network and generate a pseudo-multi-exposure image stack with different detail visibility and color saturation. We then train another lightweight DNN to fuse the LDR image stack into the desired LDR image by maximizing a variant of the structural similarity index for multi-exposure image fusion (MEF-SSIM), which has been proven perceptually relevant to fused image quality. The proposed self-calibration mechanism through MEF enables our TMO to accept uncalibrated HDR images while remaining physiology-driven. Extensive experiments show that our method produces images with consistently better visual quality. Additionally, since our method builds upon three lightweight DNNs, it is among the fastest local TMOs.
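A minimal sketch of the kind of locally normalized Laplacian pyramid the first stage builds on is given below, assuming a single-channel image and SciPy; the blur kernel, number of levels, and normalization constant are illustrative assumptions rather than the paper's exact NLPD parameters.

```python
# Sketch: Laplacian pyramid with divisive local-amplitude normalization.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4, eps=1e-2):
    pyramid, current = [], img.astype(np.float64)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=1.0)            # low-pass band
        down = zoom(low, 0.5, order=1)                       # decimate by 2
        up = zoom(down, np.array(current.shape) / np.array(down.shape), order=1)
        band = current - up                                  # band-pass residual
        # divisive normalization by a local amplitude estimate, NLPD-style
        norm = gaussian_filter(np.abs(band), sigma=1.0) + eps
        pyramid.append(band / norm)
        current = down
    pyramid.append(current)                                  # coarse residual
    return pyramid

img = np.random.rand(64, 64)               # stand-in for an HDR luminance channel
bands = laplacian_pyramid(img)             # 4 normalized bands + coarse residual
```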

    PAPR Reduction Solutions for 5G and Beyond

    The latest fifth generation (5G) wireless technology provides improved communication quality compared to earlier generations. The 5G New Radio (NR), specified by the 3rd Generation Partnership Project (3GPP), addresses the modern requirements of wireless networks and targets improved communication quality in terms of, for example, peak data rates, latency, and reliability. On the other hand, there are still various crucial issues that impact the implementation and energy-efficiency of 5G NR networks and their different deployments. The power-efficiency of transmitter power amplifiers (PAs) is one of these issues. The PA is an important unit of a communication system, responsible for amplifying the transmit signal towards the antenna. Reaching high PA power-efficiency is known to be difficult when the transmit waveform has a high peak-to-average power ratio (PAPR). Cyclic prefix (CP) orthogonal frequency-division multiplexing (OFDM), the main physical-layer waveform of 5G NR, suffers from this high-PAPR challenge. Many PAPR reduction methods have been proposed in the literature; however, many of them either have very notable computational complexity or impose substantial in-band distortion. Moreover, 5G NR has new features that require redesigning the PAPR reduction methods. In line with these, the first contribution of this thesis is a novel frequency-selective PAPR reduction concept, where clipping noise is shaped in a frequency-selective manner over the active passband. This concept is in line with 5G NR, where aggressive frequency-domain multiplexing is considered an important feature. Utilizing frequency-selective PAPR reduction enables the realization of heterogeneous resource utilization within one passband. The second contribution of this thesis is the frequency-selective single-numerology (SN) and mixed-numerology (MN) PAPR reduction methods. 5G NR targets utilizing different physical resource blocks (PRBs) and bandwidth parts (BWPs) within one passband flexibly, yet existing PAPR reduction methods do not exploit these features. Based on this, novel algorithms utilizing PRB- and BWP-level control of clipping noise are designed to meet the error vector magnitude (EVM) limits of the modulations while reducing the PAPR. The MN allocation has one critical challenge, as inter-numerology interference (INI) emerges after aggregation of subband signals. The proposed MN PAPR reduction algorithm overcomes this issue by cancelling INI within the PAPR reduction loop, which has not been considered earlier. The third contribution of this thesis is the proposal of two novel non-iterative PAPR reduction methods. The first method utilizes fast-convolution filtered-OFDM (FC-F-OFDM), which has excellent spectral containment, and combines it with clipping. Moreover, clipping noise is also allocated to guard bands by filter passband extension (FPE), and clipping noise in out-of-band (OOB) regions is essentially suppressed through FC filtering. The second method is guard-tone reservation (GTR), which is applied to discrete Fourier transform-spread-OFDM (DFT-s-OFDM). Uniquely, GTR estimates the time-domain peaks in the data symbol domain before the inverse fast Fourier transform (IFFT) and uses guard-band tones for PAPR reduction. The fourth contribution of the thesis is the design of two novel machine learning (ML) algorithms that address the drawbacks of frequency-selective PAPR reduction. The first ML algorithm, PAPRer, models the nonlinear relation between the PAPR target and the realized PAPR value. It then auto-tunes the optimal PAPR target and in this way minimizes the realized PAPR. The second ML algorithm, one-shot clipping-and-filtering (OSCF), solves the complexity problem of iterative clipping and filtering (ICF)-like methods by generating a properly approximated clipping noise signal after running only one iteration, leading to very efficient PAPR reduction. Finally, an over-arching contribution of this thesis is the experimental validation of the performance benefits of the proposed methods using realistic 5G NR uplink (UL) and downlink (DL) testbeds that include realistic PAs and associated hardware. It is very important to confirm the practical benefits of the proposed methods, and this is realized through the conducted experimental work.
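For reference, the sketch below implements the classic iterative clipping-and-filtering (ICF) baseline that OSCF and the frequency-selective methods improve upon, for one oversampled OFDM symbol; the FFT size, clipping ratio, and subcarrier mapping are illustrative assumptions rather than 5G NR parameters.

```python
# Sketch: PAPR measurement and iterative clipping-and-filtering for one OFDM symbol.
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def icf(symbols, n_fft=256, oversample=4, clip_ratio_db=5.0, iters=3):
    """Clip time-domain peaks, then remove the out-of-band part of the clipping noise."""
    n_active = len(symbols)
    grid = np.zeros(n_fft * oversample, dtype=complex)
    grid[:n_active // 2] = symbols[:n_active // 2]                 # positive subcarriers
    grid[-(n_active - n_active // 2):] = symbols[n_active // 2:]   # negative subcarriers
    active = np.abs(grid) > 0
    x = np.fft.ifft(grid)
    clip_level = 10 ** (clip_ratio_db / 20) * np.sqrt(np.mean(np.abs(x) ** 2))
    for _ in range(iters):
        mag = np.maximum(np.abs(x), 1e-12)
        x = np.where(mag > clip_level, clip_level * x / mag, x)    # clip the peaks
        spectrum = np.fft.fft(x)
        spectrum[~active] = 0            # filter: keep clipping noise in-band only
        x = np.fft.ifft(spectrum)
    return x

rng = np.random.default_rng(1)
qam = rng.choice([-3, -1, 1, 3], 120) + 1j * rng.choice([-3, -1, 1, 3], 120)
x_lowpapr = icf(qam)
print(f"PAPR after ICF: {papr_db(x_lowpapr):.2f} dB")
```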

    The Importance of Anti-Aliasing in Tiny Object Detection

    Tiny object detection has gained considerable attention in the research community owing to the frequent occurrence of tiny objects in numerous critical real-world scenarios. However, convolutional neural networks (CNNs) used as the backbone of object detection architectures typically neglect Nyquist's sampling theorem during down-sampling operations, resulting in aliasing and degraded performance. This is likely to be a particular issue for tiny objects, which occupy very few pixels and therefore have high spatial frequency features. This paper applies an existing anti-aliasing approach, WaveCNet, to tiny object detection. WaveCNet addresses aliasing by replacing standard down-sampling processes in CNNs with Wavelet Pooling (WaveletPool) layers, effectively suppressing aliasing. We modify the original WaveCNet to apply WaveletPool consistently in both pathways of the residual blocks in ResNets. Additionally, we propose a bottom-heavy version of the backbone, which further improves the performance of tiny object detection while also reducing the required number of parameters by almost half. Experimental results on the TinyPerson, WiderFace, and DOTA datasets demonstrate the importance of anti-aliasing in tiny object detection and the effectiveness of the proposed method, which achieves new state-of-the-art results on all three datasets. Code and experimental results are released at https://github.com/freshn/Anti-aliasing-Tiny-Object-Detection.git.
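A minimal sketch of the wavelet-pooling idea, assuming PyWavelets: down-sample by keeping only the low-pass (approximation) subband of a single-level DWT, which suppresses the high-frequency content that would otherwise alias. The wavelet choice and tensor layout are illustrative assumptions rather than the paper's exact WaveletPool layer.

```python
# Sketch: anti-aliased down-sampling of a feature map via the DWT low-pass band.
import numpy as np
import pywt

def wavelet_pool(feature_map, wavelet="haar"):
    """Down-sample a (channels, H, W) feature map by 2 using the DWT approximation
    band; the detail bands (and their high-frequency content) are discarded."""
    pooled = []
    for channel in feature_map:
        cA, (cH, cV, cD) = pywt.dwt2(channel, wavelet)
        pooled.append(cA)
    return np.stack(pooled)

x = np.random.rand(8, 32, 32)              # stand-in for a CNN feature map
y = wavelet_pool(x)                        # shape (8, 16, 16)
```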