Predictive processing in neuroscience, computational modeling and psychology
Over the past decades, predictive processing has emerged as a powerful theoretical framework that holds promise for explaining a wide range of phenomena, including perception and imagery as well as sensorimotor control and consciousness. Here we focus on the question of whether and how predictive processing may be implemented in the mammalian and human brain, and what its scope of perceptual and cognitive functions is. We review basic and advanced computational models of predictive processing that expand the range of computational, cognitive and sensorimotor capacities and enhance biological plausibility. Based on the empirical evidence, major steps still need to be taken to flesh out how predictive processing may be precisely implemented in the brain, but the overall framework holds great potential as an explanatory account in the neurosciences and psychology, with strong links to Artificial Intelligence.
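The core idea behind the computational models reviewed here, minimization of prediction error, can be sketched in a few lines. The linear generative model, weights, and learning rate below are illustrative assumptions, not a model from the paper:

```python
import numpy as np

# Minimal predictive-coding sketch (an illustration of the framework, not a
# model from the paper): a linear generative model predicts sensory input as
# W @ v, and inference adjusts the latent cause v along the error signal
# until the prediction error is explained away.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # assumed generative weights (3 sensors, 2 causes)
v_true = np.array([2.0, -1.0])        # hidden cause that generated the input
x = W @ v_true                        # noiseless sensory input

v = np.zeros(2)                       # initial belief about the cause
for _ in range(200):
    err = x - W @ v                   # bottom-up input minus top-down prediction
    v += 0.1 * W.T @ err              # update the belief to reduce the error

# after convergence, v approximates v_true and the prediction error vanishes
```

Hierarchical and nonlinear variants of this loop underlie most of the models discussed in the review.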
Strategies for adiabatic state preparation of quantum many-body systems
Quantum computers represent a relatively new and promising development in computing technology. One of the applications of quantum computers is modelling quantum many-body systems, which describe interactions between particles on atomic and subatomic scales. Such systems are highly complex and challenging to model with classical computers due to the vast number of possible states and entanglement of the particles. In this dissertation, we examine the extent to which quantum computers can accelerate the modelling of quantum many-body systems compared to classical computers. Specifically, this dissertation presents research on adiabatic state preparation: a quantum algorithmic technique that uses the adiabatic principle from quantum mechanics to approximate eigenstates. We describe three new techniques that fall within this category and, in certain cases, offer advantages over standard methods.
First, we consider cases of ground state preparation for fermionic many-body systems, where standard direct interpolation between the initial and final Hamiltonians is hindered by level crossings due to discrete symmetries. As an alternative to direct interpolation, we propose adiabatic paths in a higher-dimensional space, which break the relevant symmetries.
Next, we present an adiabatic echo verification protocol which mitigates both coherent and incoherent errors, arising from non-adiabatic transitions and hardware noise, respectively. We show that the estimator bias of the observable is reduced when compared to standard adiabatic preparation, achieving up to a quadratic improvement.
Finally, we propose a general, fully gate-based and non-variational quantum algorithm for counterdiabatic driving. We provide a rigorous upper bound on the quantum gate complexity in terms of the minimum gap around the target eigenstate, showing that in the worst case the algorithm can be run with a number of quantum gates bounded in terms of this gap while achieving any prescribed target state fidelity. In certain cases, the gap dependence can be improved to quadratic.
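The adiabatic principle underlying all three techniques can be illustrated with a toy two-level simulation; the Hamiltonians, total time, and step count below are illustrative choices, not the thesis's constructions:

```python
import numpy as np

# Toy adiabatic state preparation: interpolate H(s) = (1-s)*H0 + s*H1 and
# evolve the ground state of H0 slowly. By the adiabatic theorem, the state
# tracks the instantaneous ground state when the total time T is large
# compared to the inverse square of the minimum spectral gap.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H0, H1 = -X, -Z                      # initial/target Hamiltonians (gap stays open)

def ground_state(H):
    vals, vecs = np.linalg.eigh(H)   # eigh returns eigenvalues in ascending order
    return vecs[:, 0]

psi = ground_state(H0)
T, steps = 50.0, 5000                # larger T -> more adiabatic evolution
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps
    vals, vecs = np.linalg.eigh((1 - s) * H0 + s * H1)
    # exact piecewise-constant propagator exp(-i H dt) via eigendecomposition
    U = vecs @ np.diag(np.exp(-1j * vals * dt)) @ vecs.conj().T
    psi = U @ psi

fidelity = abs(np.vdot(ground_state(H1), psi)) ** 2
# fidelity approaches 1 as T grows
```

Level crossings of the kind discussed above would make the gap close along the direct path, which is exactly when such a schedule fails and a symmetry-breaking detour is needed.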
Deep learning for landmark detection, segmentation, and multi-objective deformable registration in medical imaging
Cervical cancer affects about half a million women globally every year. Treatment of cervical cancer with curative intent mainly consists of surgery, radiation treatment, or a combination of radiation treatment with chemotherapy or hyperthermia. Radiation treatment uses a high dose of ionizing radiation to kill the tumor cells. The radiation dose is usually delivered in the form of External Beam Radiation Treatment (EBRT) with a linear accelerator, followed by internal radiation treatment (brachytherapy), during which a small radioactive source is passed through an applicator and needles that are placed temporarily near the cervix. EBRT typically spans several weeks with daily sessions (often referred to as fractions), whereas brachytherapy typically consists of three or four fractions based on one to three implantations. The aim of the radiation treatment is to deliver an effective dose that kills the tumor cells while sparing the nearby healthy tissue, the Organs At Risk (OARs), as much as possible. This is achieved by treatment planning following the contouring of target volumes and OARs on medical imaging scans, typically Computed Tomography (CT) and/or Magnetic Resonance Imaging (MRI).
Improved classical and quantum algorithms for the shortest vector problem via bounded distance decoding
The most important computational problem on lattices is the shortest vector problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results. (1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer 4 ≤ q ≤ √n, our algorithm takes q^(13n+o(n)) time and requires poly(n)·q^(16n/q^2) memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for discrete Gaussian sampling above the smoothing parameter. (2) A quantum algorithm for SVP that runs in time 2^(0.950n+o(n)) and requires 2^(0.5n+o(n)) classical memory and poly(n) qubits. In a quantum random access memory (QRAM) model, this algorithm takes only 2^(0.835n+o(n)) time and requires a QRAM of size 2^(0.293n+o(n)), poly(n) qubits and 2^(0.5n) classical space. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [D. Aggarwal et al., Solving the shortest vector problem in 2^n time using discrete Gaussian sampling: Extended abstract, in Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing (STOC), 2015, pp. 733-742], which has time and space complexity 2^(n+o(n)). (3) A classical algorithm for SVP that runs in 2^(1.669n+o(n)) time and 2^(0.5n+o(n)) space. This improves over an algorithm of [Y. Chen, K. Chung, and C. Lai, Quantum Inf. Comput., 18 (2018), pp. 285-306] that has the same space complexity. The time complexities of our classical and quantum algorithms are obtained using a known upper bound on a quantity related to the lattice kissing number, which is 2^(0.402n). We conjecture that for most lattices this quantity is 2^(o(n)). Assuming that this is the case, our classical algorithm runs in time 2^(1.292n+o(n)), our quantum algorithm runs in time 2^(0.750n+o(n)), and our quantum algorithm in a QRAM model runs in time 2^(0.667n+o(n)). As a direct application of our result, using the reduction in [L. Ducas, Des. Codes Cryptogr., 92 (2024), pp. 909-916], we obtain a provable quantum algorithm for the lattice isomorphism problem in the case of the trivial lattice Z^n (ZLIP) that runs in time 2^(0.417n+o(n)). Our algorithm requires a QRAM of size 2^(0.147n+o(n)), poly(n) qubits and 2^(0.25n) classical space.
An effective aggregation heuristic for Capacitated Facility Location Problems with many demand points
In location analysis, the effects of demand aggregation have been the subject of many studies. This body of literature is mainly focused on p-median and p-center problems; relatively few papers on aggregation explicitly concern the Capacitated Facility Location Problem (CFLP). Our work examines the beneficial use of aggregation in the context of the CFLP. We focus on problems where there are significantly more demand points than potential facility locations, since this is where aggregation is most effective at reducing complexity. We examine ways to obtain an aggregation at a fixed resolution that is likely to perform well for a given instance of the problem. These aggregation techniques form the core of a broader algorithmic framework, which contributes to the literature on heuristics for CFLPs. Our core aggregation method applies k-means clustering in R^m, where m is the number of potential facilities. The space in which we cluster is constructed by applying a transformation to the normalized distance matrix of the original CFLP instance. The aim of the transformation is to magnify differences in distance where relevant, and to compress irrelevant differences. We evaluate our heuristic method on larger instances based on a real-world problem in reverse logistics. The results are encouraging and indicate that our method is capable of outperforming an intuitive benchmark aggregation method. We find that choosing the right hyperparameters and starting with a good initialization help our method perform better.
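The embedding-then-cluster idea can be sketched as follows. The exponential transform below is a stand-in for the paper's (unspecified) distance transformation, and the instance sizes are arbitrary; both are assumptions for illustration:

```python
import numpy as np

# Sketch of the aggregation heuristic: embed each demand point as its
# (transformed) row of the point-to-facility distance matrix in R^m, then
# cluster with k-means so each cluster becomes one aggregated demand point.
# The exp(-5*d) transform is a hypothetical stand-in that magnifies small
# (relevant) distances and compresses large (irrelevant) ones.
rng = np.random.default_rng(1)
n_points, m, k = 500, 10, 25          # demand points, facilities, clusters
points = rng.uniform(size=(n_points, 2))
facilities = rng.uniform(size=(m, 2))
demand = rng.uniform(1, 5, size=n_points)

D = np.linalg.norm(points[:, None, :] - facilities[None, :, :], axis=2)
D = D / D.max()                        # normalize distances to [0, 1]
emb = np.exp(-5.0 * D)                 # hypothetical transformation

def kmeans(X, k, iters=50, rng=rng):
    """Plain Lloyd's algorithm with random initialization."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(emb, k)
agg_demand = np.array([demand[labels == j].sum() for j in range(k)])
# the CFLP is then solved on k aggregated points instead of n_points originals
```

Clustering in facility-distance space (rather than geographic space) groups demand points that look alike to the optimizer, which is the property the benchmark geographic aggregation lacks.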
Security analysis of covercrypt: A quantum-safe hybrid key encapsulation mechanism for hidden access policies
The ETSI Technical Specification 104 015 proposes a framework to build Key Encapsulation Mechanisms (KEMs) with access policies and attributes, in the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) vein. Several security guarantees and functionalities are claimed, such as pre-quantum and post-quantum hybridization to achieve security against Chosen-Ciphertext Attacks (CCA), anonymity, and traceability.
In this paper, we present a formal security analysis of a more generic construction, with application to the specific Covercrypt scheme, based on the pre-quantum ECDH and the post-quantum ML-KEM KEMs. We additionally provide an open-source library that implements the ETSI standard, in Rust, with high efficiency.
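The pre-/post-quantum hybridization mentioned above is, generically, a KEM combiner: two independently established shared secrets are bound together through a KDF so that the session key remains secure as long as either component KEM is. The sketch below is a generic illustration with hash-based combining and random stand-in secrets, not the Covercrypt or ETSI construction:

```python
import hashlib
import os

# Generic hybrid-KEM combiner sketch (illustrative; not the ETSI/Covercrypt
# spec): shared secrets from a pre-quantum KEM (e.g. ECDH) and a post-quantum
# KEM (e.g. ML-KEM) are hashed together with their ciphertexts, so breaking
# the session key requires breaking both components.
def combine(ss_pre, ct_pre, ss_post, ct_post):
    h = hashlib.sha256()
    for part in (ss_pre, ct_pre, ss_post, ct_post):
        h.update(len(part).to_bytes(4, "big"))  # length-prefix each input
        h.update(part)
    return h.digest()

# random stand-ins for the two KEMs' outputs (real code would run the KEMs)
ss1, ct1 = os.urandom(32), os.urandom(64)
ss2, ct2 = os.urandom(32), os.urandom(1088)
session_key = combine(ss1, ct1, ss2, ct2)
```

Binding the ciphertexts into the KDF input, not just the secrets, is what typically lifts such combiners to CCA security; the paper's formal analysis addresses exactly this kind of claim.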
Scintillator decorrelation for self-supervised x-ray radiograph denoising
X-ray radiographs from industrial, medical, and laboratory x-ray equipment can degrade severely due to fast and/or low-dose acquisition, x-ray scatter, and electronic noise from the detector instrument. As a consequence, noise and artifacts propagate into computed tomography (CT) images. Recently, a new class of self-supervised deep learning methods, including Noise2Self and Noise2Void, demonstrated state-of-the-art denoising results on data sets of pixelwise statistically independent noisy images. These methods, called blind-spot networks (BSNs), are promising for applications where clean training examples or pairs of noisy examples are unavailable. For x-ray imaging, however, the detection principle of x-ray scintillators leads to a spatially correlated mix of Poisson and Gaussian noise, rendering BSNs ineffective. In this article, we propose and validate a denoising workflow that reverts the correlations by a direct deconvolution with an estimate of the scintillator point-response function. We show that it can restore the denoising performance of Noise2Self, and demonstrate it for dynamic sparse-view CT reconstruction of single-bubble gas-solids fluidized beds using a data set of unpaired noisy radiographs from cesium iodide scintillator flat-panel detectors.
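The decorrelation step can be illustrated on synthetic data: model the scintillator as blurring pixelwise-independent noise with a point-response function (PSF), then invert the blur by direct deconvolution in Fourier space. The Gaussian PSF and image size below are assumed stand-ins, not the paper's estimated PSF:

```python
import numpy as np

# Illustration of scintillator decorrelation: spatially correlated noise is
# modeled as independent noise convolved with the scintillator PSF; dividing
# by the PSF in Fourier space inverts the blur and restores (approximately)
# pixelwise-independent noise, as blind-spot denoisers require.
rng = np.random.default_rng(0)
n = 128
noise = rng.normal(size=(n, n))            # pixelwise-independent noise

# assumed Gaussian PSF (sigma = 1.5 px) and its Fourier transform
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()
PSF = np.fft.fft2(np.fft.ifftshift(psf))

corr = np.real(np.fft.ifft2(np.fft.fft2(noise) * PSF))    # correlated noise
decorr = np.real(np.fft.ifft2(np.fft.fft2(corr) / PSF))   # direct deconvolution

def neighbor_corr(img):
    """Correlation between horizontally adjacent pixels."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

# blurring induces strong neighbor correlation; deconvolution removes it
```

In practice the PSF must be estimated from the detector, and the deconvolution amplifies high frequencies, which is why the workflow pairs it with a learned denoiser rather than using it alone.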
Solving equations using Khovanskii bases
We develop a new eigenvalue method for solving structured polynomial equations over any field. The equations are defined on a projective algebraic variety which admits a rational parameterization by a Khovanskii basis, e.g., a Grassmannian in its Plücker embedding. This generalizes established algorithms for toric varieties, and introduces the effective use of Khovanskii bases in computer algebra. We investigate regularity questions and discuss several applications.
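As a baseline for what "eigenvalue method" means here, consider the simplest (univariate) instance, which the paper generalizes far beyond: the roots of a polynomial are the eigenvalues of the multiplication-by-x operator on the quotient ring, represented by the companion matrix. The example polynomial is an arbitrary illustrative choice:

```python
import numpy as np

# Univariate eigenvalue method: roots of a monic p(x) are the eigenvalues of
# its companion matrix, i.e. of the "multiply by x" map on C[x]/(p).
# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1.0, -6.0, 11.0, -6.0]            # monic, highest degree first
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)                  # subdiagonal shift structure
C[:, -1] = [-c for c in coeffs[:0:-1]]      # last column: -a_0, -a_1, -a_2
roots = np.sort(np.linalg.eigvals(C).real)
# roots approximate the zeros 1, 2, 3 of p
```

The paper's contribution is, roughly, constructing analogous multiplication operators when the ambient variety is parameterized by a Khovanskii basis rather than by monomials.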
The Queue Automaton Revisited
We consider the computational model of the queue automaton. A classical result is that the deterministic queue automaton is as expressive as the Turing machine. Previously, we introduced the Reactive Turing Machine, which enhances the Turing machine with a notion of interaction and defines all executable processes. In this paper, we prove that the non-deterministic queue automaton is as expressive as the Reactive Turing Machine. Together with finite automata, pushdown automata and parallel pushdown automata, queue automata form a nice hierarchy of executable processes, with stacks, bags and queues as central elements.
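To give a feel for why a queue is strictly more powerful than a stack, the sketch below simulates a queue automaton deciding { a^n b^n c^n }, a language no pushdown automaton accepts. This is an illustration of queue expressiveness, not a construction from the paper:

```python
from collections import deque

# Queue-automaton sketch deciding { a^n b^n c^n } (not context-free):
# the finite control first checks the a*b*c* shape, then repeatedly cycles
# the queue, deleting one a, one b and one c per pass (a '#' marker
# delimits passes); the word is accepted iff every pass deletes all three
# letters and the queue eventually empties.
def accepts(word):
    if "".join(sorted(word)) != word:   # letters must come in a..b..c order
        return False
    q = deque(word)
    while q:
        q.append("#")                   # marker: one full pass over the queue
        dropped = set()
        while (sym := q.popleft()) != "#":
            if sym not in dropped:
                dropped.add(sym)        # delete first a, first b, first c
            else:
                q.append(sym)           # recycle the rest for the next pass
        if dropped != {"a", "b", "c"}:
            return False                # counts were unbalanced
    return True
```

The same cycling trick is how a queue simulates a Turing-machine tape, which is the intuition behind the expressiveness results in the paper.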