Centrum Wiskunde & Informatica

CWI's Institutional Repository

    Mixed Schur-Weyl duality in quantum information

    This thesis explores the interplay between representation theory and quantum information. Specifically, we focus on mixed Schur–Weyl duality, which considers the action of the unitary group on mixed tensors. This setting naturally arises in quantum information tasks involving unitary-equivariant channels, such as port-based teleportation, quantum majority vote, and universal transposition of unitary operators. A key contribution of this thesis is an explicit derivation of the action of the generators of the partially transposed permutation matrix algebra—the commutant of the mixed unitary action—in the Gelfand–Tsetlin basis. As another key result, we develop efficient quantum circuits for the mixed quantum Schur transform, a novel primitive in quantum information. The key ingredient of our construction is new efficient circuits for the dual Clebsch–Gordan transform of the unitary group. A significant application of our findings is the construction of efficient quantum algorithms for port-based teleportation, a variant of quantum teleportation that eliminates the need for corrective operations. Another application is a symmetry reduction of semidefinite optimisation problems with unitary-equivariance symmetry. Finally, we study the extendibility of quantum states possessing unitary, mixed unitary, or orthogonal symmetry on the complete graph. We obtain analytically the exact maximum values for projections onto the maximally entangled state and the antisymmetric state for each of the three symmetry classes. This thesis demonstrates the usefulness of mixed Schur–Weyl duality in quantum information and computing. We expect that our tools will help address problems in other areas of quantum information processing, such as communication, cryptography, and simulation.
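The simplest instance of this duality can be checked numerically: under ordinary Schur–Weyl duality the swap operator commutes with U ⊗ U, while in the mixed setting U ⊗ Ū the commutant contains the *partially transposed* swap, which equals d times the projector onto the maximally entangled state. The following is a minimal sketch (not code from the thesis) verifying both facts for a random unitary:

```python
import numpy as np

def random_unitary(d, rng):
    # Haar-distributed unitary via QR of a complex Gaussian matrix,
    # with the column phases fixed to make the distribution uniform.
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 3
rng = np.random.default_rng(0)
U = random_unitary(d, rng)

# Swap operator S|i,j> = |j,i>: commutes with U (x) U (ordinary Schur-Weyl).
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1
UU = np.kron(U, U)
assert np.linalg.norm(UU @ S - S @ UU) < 1e-10

# The partial transpose of S on the second factor equals d |Phi><Phi|,
# the unnormalized projector onto |Phi> = sum_i |ii> / sqrt(d).
# It commutes with the *mixed* action U (x) conj(U).
phi = np.eye(d).reshape(d * d) / np.sqrt(d)
P = d * np.outer(phi, phi.conj())
UUbar = np.kron(U, U.conj())
assert np.linalg.norm(UUbar @ P - P @ UUbar) < 1e-10
```

The partially transposed permutation algebra studied in the thesis is generated by exactly such operators on more tensor factors.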

    Studying exploration in RL: An optimal transport analysis of occupancy measure trajectories

    The rising successes of RL are propelled by combining smart algorithmic strategies and deep architectures to optimize the distribution of returns and visitations over the state-action space. A quantitative framework to compare the learning processes of these eclectic RL algorithms is currently absent but desired in practice. We address this gap by representing the learning process of an RL algorithm as a sequence of policies generated during training, and then studying the policy trajectory induced in the manifold of state-action occupancy measures. Using an optimal transport-based metric, we measure the length of the paths induced by the policy sequence yielded by an RL algorithm between an initial policy and a final optimal policy. Based on this, we first define the Effort of Sequential Learning (ESL). ESL quantifies the relative distance that an RL algorithm travels compared to the shortest path from the initial to the optimal policy. Furthermore, we connect the dynamics of policies in the occupancy measure space and regret (another metric for the suboptimality of an RL algorithm) by defining the Optimal Movement Ratio (OMR). OMR assesses the fraction of movements in the occupancy measure space that effectively reduce an analogue of regret. Finally, we derive approximation guarantees to estimate ESL and OMR with a finite number of samples and without access to an optimal policy. Through empirical analyses across various environments and algorithms, we demonstrate that ESL and OMR provide insights into the exploration processes of RL algorithms and the hardness of different tasks in discrete and continuous MDPs.
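The ESL idea can be sketched in a toy setting. The paper's metric is optimal-transport-based on occupancy measures; here we assume a 1-D discrete state space, where the 1-Wasserstein distance has a closed form (the L1 distance between CDFs), and compute the path-length-to-direct-distance ratio. The trajectory and discretization are invented for illustration:

```python
import numpy as np

def w1_line(p, q):
    # 1-Wasserstein distance between distributions on ordered states 0..n-1:
    # on the line it equals the L1 distance between the two CDFs.
    return np.abs(np.cumsum(p - q)).sum()

def esl(occupancies):
    # Toy Effort of Sequential Learning: total path length of the occupancy
    # trajectory divided by the direct distance from initial to final measure.
    # ESL >= 1 always; ESL == 1 means the algorithm moved along a geodesic.
    path = sum(w1_line(a, b) for a, b in zip(occupancies, occupancies[1:]))
    return path / w1_line(occupancies[0], occupancies[-1])

# A toy trajectory over 4 states that overshoots before settling.
traj = [np.array(p) for p in [
    [1.0, 0.0, 0.0, 0.0],   # initial policy's occupancy
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.1, 0.2, 0.7],   # overshoot past the target
    [0.0, 0.0, 0.5, 0.5],   # final policy's occupancy
]]
print(round(esl(traj), 3))  # → 1.16
```

A detour (the overshoot) makes the path 16% longer than the geodesic, which is exactly the kind of exploration inefficiency ESL is meant to expose.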

    A comprehensive academic and industrial survey of blockchain technology for the energy sector using fuzzy Einstein decision-making

    The global energy sector is undergoing a significant transformation driven by decarbonization and digitalization, leading to the emergence of Distributed Ledger Technology (DLT) — particularly blockchain — as a promising tool for enhancing transparency, security, and efficiency in modern power systems. This study aims to provide a comprehensive academic and industrial survey of blockchain applications in the energy sector and develop a robust decision-making framework to identify and prioritize the most promising real-world use cases based on multidisciplinary criteria. A three-stage methodology was adopted: (i) a literature and market review encompassing over 300 academic publications and commercial blockchain initiatives in energy, (ii) an in-depth evaluation of the evolution and viability of blockchain initiatives in energy with the help of expert surveys, and (iii) a novel decision-making model using a q-rung orthopair fuzzy Multi-Attributive Border Approximation Area Comparison (q-ROF-MABAC) method under the Einstein operator. The results were compared with existing decision models to validate consistency and robustness. Nine key blockchain use case categories were identified and ranked based on technical, economic, and governance dimensions. The results demonstrated that integrating expert insights into a fuzzy logic framework helps filter out overhyped claims in the literature and prioritize realistic and high-impact applications such as green certificates, grid services, and peer-to-peer energy trading. The model's rankings remained stable across varying weight configurations, confirming the robustness of the methodology. This study provides an evidence-based decision-support tool for researchers, industry stakeholders, and policymakers to better understand, evaluate, and adopt blockchain technologies in the energy sector.
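The border-approximation idea behind MABAC is easy to see in the plain (crisp) variant, without the q-rung orthopair fuzzy numbers and Einstein operators the study actually uses. The sketch below, with invented alternatives, criteria, and weights, ranks use cases by their distance above or below a geometric-mean "border" row:

```python
import numpy as np

def mabac(X, weights, benefit):
    # Crisp MABAC. X: alternatives x criteria; benefit[j] marks benefit
    # (higher is better) vs cost (lower is better) criteria.
    m, n = X.shape
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Min-max normalization, reversed for cost criteria.
    N = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))
    V = weights * (N + 1)             # weighted normalized matrix
    G = V.prod(axis=0) ** (1.0 / m)   # border approximation area (geometric mean)
    return (V - G).sum(axis=1)        # score: net distance from the border

# Hypothetical scores for 4 use cases on 3 criteria:
# technical maturity and economic value (benefit), regulatory risk (cost).
X = np.array([
    [7.0, 6.0, 3.0],   # green certificates
    [6.0, 7.0, 4.0],   # grid services
    [8.0, 5.0, 6.0],   # P2P energy trading
    [4.0, 4.0, 7.0],   # an overhyped use case
])
w = np.array([0.4, 0.35, 0.25])
scores = mabac(X, w, benefit=np.array([True, True, False]))
ranking = np.argsort(-scores)   # best alternative first
```

The fuzzy extension in the paper replaces each crisp score with a q-rung orthopair fuzzy number and the arithmetic with Einstein-operator aggregation, but the border-comparison structure is the same.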

    Fault-tolerant structures for measurement-based quantum computation on a network

    In this work, we introduce a method to construct fault-tolerant measurement-based quantum computation (MBQC) architectures and numerically estimate their performance over various types of networks. A possible application of such a paradigm is distributed quantum computation, where separate computing nodes work together on a fault-tolerant computation through entanglement. We gauge error thresholds of the architectures with an efficient stabilizer simulator to investigate the resilience against both circuit-level and network noise. We show that, for both monolithic (i.e., non-distributed) and distributed implementations, an architecture based on the diamond lattice may outperform the conventional cubic lattice. Moreover, the high erasure thresholds of non-cubic lattices may be exploited further in a distributed context, as their performance may be boosted through entanglement distillation by trading entanglement success rates for erasure errors during the error-decoding process. These results highlight the significance of lattice geometry in the design of fault-tolerant measurement-based quantum computing on a network, emphasizing the potential for constructing robust and scalable distributed quantum computers.

    Infinite-horizon Fuk-Nagaev inequalities

    We develop explicit bounds for the tail of the distribution of the all-time supremum of a random walk with negative drift, where the increments have a truncated heavy-tailed distribution. As an application, we consider a ruin problem in the presence of reinsurance.
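The quantity being bounded can be illustrated by simulation (the paper's bounds are analytic; the Monte Carlo below is purely illustrative, with invented parameters). The increments are truncated Pareto variables minus a drift chosen larger than the truncated mean, so the walk drifts downward and its all-time supremum is finite:

```python
import random

def sup_tail_mc(u, drift=3.5, alpha=1.5, trunc=50.0,
                n_steps=500, n_paths=1000, seed=1):
    # Monte Carlo estimate of P(sup_n S_n > u) for the random walk
    # S_n = sum_{i<=n} (X_i - drift), X_i ~ Pareto(alpha) truncated at trunc.
    # The truncated mean is ~2.72 < drift, so the drift is negative; the
    # all-time supremum is approximated by the supremum over n_steps steps.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        s, smax = 0.0, 0.0
        for _ in range(n_steps):
            s += min(rng.paretovariate(alpha), trunc) - drift
            smax = max(smax, s)
        hits += smax > u
    return hits / n_paths
```

In the ruin interpretation, `u` is the initial capital and the estimate is the ruin probability; the truncation level plays the role of the reinsurance retention limit.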

    Detecting wing fractures in chickens using deep learning, photographs and computed tomography scanning

    Animal welfare monitoring is a key part of veterinary surveillance in every poultry slaughterhouse. Among the animal welfare indicators routinely inspected, the prevalence of wing fractures and soft tissue injuries (e.g. bruises) is particularly relevant, because it is related to acute pain and suffering in injured birds. According to current practice, assessment corresponds to visual examination by animal welfare officers. However, taking into consideration the speed of the production line and the limitations associated with human inspection (e.g. differences in visual perception, subjectivity and fatigue), new, more objective and automated techniques are desirable. Therefore, the aim of this study was to assess the applicability of three deep learning classification models to detect fractures and/or bruises based on computed tomography (CT) scans and photographs of the wings: 1. Model_CT (two categories: 1. BROKEN and 2. NON_BROKEN), detecting fractures based on CT scans; 2. Model_Photo_Fractures (1. FRACTURES and 2. NO_FRACTURES), detecting fractures based on photographs; and 3. Model_Photo_Bruises (1. BRUISES and 2. NO_BRUISES), detecting bruises based on photographs. To train, validate and test these models, 306 CT scans and 285 photographs were collected. The 3D ResNet34 and 2D EfficientNetV2_s architectures were used for the CT and photo models, respectively. The models reached an accuracy of 98% (Model_CT), 96% (Model_Photo_Fractures) and 82% (Model_Photo_Bruises). All in all, applying deep learning to the combination of CT scanning and photography can help to objectively recognize wing fractures and bruises. Consequently, it might lead to more accurate and objective animal welfare monitoring and, ultimately, to higher animal welfare standards.
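The abstract reports plain accuracy for each model. For a welfare-monitoring application, the split into sensitivity (fractures detected) and specificity (intact wings cleared) is also informative, since missed fractures are the costly error. A minimal sketch of these confusion-matrix metrics, on hypothetical labels rather than the study's data:

```python
def binary_metrics(y_true, y_pred, positive="BROKEN"):
    # Confusion-matrix metrics for a two-class model such as Model_CT.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # fraction of fractures detected
        "specificity": tn / (tn + fp),   # fraction of intact wings cleared
    }

# Hypothetical hold-out labels (not the study's data):
truth = ["BROKEN"] * 4 + ["NON_BROKEN"] * 6
preds = ["BROKEN", "BROKEN", "BROKEN", "NON_BROKEN",
         "NON_BROKEN", "NON_BROKEN", "NON_BROKEN",
         "NON_BROKEN", "NON_BROKEN", "BROKEN"]
m = binary_metrics(truth, preds)
```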

    Hypercontractivity for quantum erasure channels via variable multipartite log-Sobolev inequality

    We prove an almost optimal hypercontractive inequality for products of quantum erasure channels, generalizing the hypercontractivity for classical binary erasure channels. To our knowledge, this is the first tensorization-type hypercontractivity bound for quantum channels with no fixed states. The traditional inductive arguments for classical hypercontractivity cannot be generalized to the quantum setting due to the non-commutativity of matrices. To overcome this difficulty, we establish a novel quantum log-Sobolev inequality for Bernoulli entropy, which includes the classical log-Sobolev inequality and the quantum log-Sobolev inequality as one-partite cases. To our knowledge, its classical counterpart was also unknown prior to this work. We establish a connection between our quantum log-Sobolev inequality and the hypercontractivity bound for quantum erasure channels via a refined quantum Gross' lemma, extending the analogous connection between the quantum log-Sobolev inequality and the hypercontractivity for qubit unital channels. As an application, we prove an almost tight bound (up to a constant factor) on the classical communication complexity of two-party common randomness generation assisted with erased-noisy EPR states, generalizing the tight bound on the same task assisted with erased-noisy random strings due to Guruswami and Radhakrishnan.

    On testing and learning quantum junta channels

    We consider the problems of testing and learning quantum k-junta channels: n-qubit to n-qubit quantum channels that act non-trivially on at most k out of n qubits and leave the remaining qubits unchanged. We show the following. 1) An O(k)-query algorithm to distinguish whether the given channel is a k-junta channel or is far from any k-junta channel, and a lower bound of Ω(√k) on the number of queries, and 2) an O(4^k)-query algorithm to learn a k-junta channel, and a lower bound of Ω(4^k/k) on the number of queries. This partially answers an open problem raised by (Chen et al., 2023). To settle these problems, we develop a Fourier analysis framework over the space of superoperators and prove several of its fundamental properties, extending the Fourier analysis over the space of operators introduced in (Montanaro and Osborne, 2010). The distance metric we consider in this paper arises from this Fourier analysis and is essentially the L2-distance between Choi representations. Besides, we introduce Influence-Sample to replace the Fourier-Sample procedure proposed in (Atici and Servedio, 2007). Our Influence-Sample uses only single-qubit operations and incurs only a constant-factor loss in efficiency.
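The quantum Influence-Sample procedure has a well-known classical counterpart: the influence of coordinate i on a Boolean function f is Inf_i(f) = Pr_x[f(x) ≠ f(x ⊕ e_i)], which can be estimated by sampling random inputs and flipping bit i. A sketch of that classical analogue (this is not the paper's quantum procedure) applied to a 2-junta:

```python
import random

def influence_estimate(f, n, i, samples=20000, seed=0):
    # Estimate Inf_i(f) = Pr_x[f(x) != f(x ^ e_i)] for f: {0,1}^n -> {0,1}
    # by flipping coordinate i of uniformly random n-bit inputs.
    rng = random.Random(seed)
    flips = 0
    for _ in range(samples):
        x = rng.getrandbits(n)
        if f(x) != f(x ^ (1 << i)):
            flips += 1
    return flips / samples

# A 2-junta on 8 bits: depends only on bits 0 and 3 (their XOR).
f = lambda x: (x & 1) ^ ((x >> 3) & 1)
```

For this f, the estimate is exactly 1.0 on the relevant coordinates 0 and 3 (flipping either always flips an XOR) and exactly 0.0 elsewhere, so sampling influences reveals which coordinates the junta acts on.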

    Adaptive quantum computers: decoding and state preparation


    GALÆXI: Solving complex compressible flows with high-order discontinuous Galerkin methods on accelerator-based systems

    This work presents GALÆXI, a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured hexahedral meshes leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GALÆXI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels, with a focus on the element-local mappings between volume and surface data due to the unstructured mesh. The scheme is implemented using a pure distributed-memory parallelization based on a domain decomposition, where each GPU handles a distinct region of the computational domain. On each GPU, the computations are assigned to different compute streams, which allows the computation of quantities required for communication to be scheduled early while local computations from other streams hide the communication latency. This parallelization strategy maximizes the use of the available computational resources and results in excellent strong scaling properties of GALÆXI up to 1024 GPUs, provided each GPU is assigned a minimum of one million degrees of freedom. To verify the implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulations of the Taylor–Green vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and match the original CPU implementation. Finally, GALÆXI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GALÆXI requires less than half the energy of the reference CPU implementation to carry out this simulation. This renders GALÆXI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.
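The convergence study mentioned above rests on a standard calculation: the observed order of convergence between successive mesh refinements is p = log(e_k/e_{k+1}) / log(h_k/h_{k+1}), which should recover the theoretical order of the scheme. A minimal sketch with hypothetical error values (not data from the paper) for a 4th-order discretization:

```python
import math

def observed_order(h, e):
    # Observed order of convergence between successive refinements:
    # p_k = log(e_k / e_{k+1}) / log(h_k / h_{k+1}).
    return [math.log(e[k] / e[k + 1]) / math.log(h[k] / h[k + 1])
            for k in range(len(h) - 1)]

# Hypothetical L2 errors under uniform refinement (halving h cuts the
# error by 2^4 = 16 for a scheme of order 4).
h = [1 / 4, 1 / 8, 1 / 16]
e = [2.0e-3, 1.25e-4, 7.8125e-6]
orders = observed_order(h, e)
```

If the computed orders match the polynomial degree plus one of the DGSEM ansatz, the implementation is considered verified.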

    13,448 full texts · 25,976 metadata records, updated in the last 30 days.
    CWI's Institutional Repository is based in the Netherlands.