
    Optimal universal quantum circuits for unitary complex conjugation

    Let $U_d$ be a unitary operator representing an arbitrary $d$-dimensional unitary quantum operation. This work presents optimal quantum circuits for transforming a number $k$ of calls of $U_d$ into its complex conjugate $\bar{U_d}$. Our circuits admit a parallel implementation and are proven to be optimal for any $k$ and $d$, with an average fidelity of $\langle F \rangle = \frac{k+1}{d(d-k)}$. Optimality is shown for average fidelity, robustness to noise, and other standard figures of merit. This extends previous works, which considered the scenario of a single call ($k=1$) of the operation $U_d$ and the special case of $k=d-1$ calls. We then show that our results encompass optimal transformations from $k$ calls of $U_d$ to $f(U_d)$ for any arbitrary homomorphism $f$ from the group of $d$-dimensional unitary operators to itself, since complex conjugation is the only non-trivial automorphism on the group of unitary operators. Finally, we apply our optimal complex conjugation implementation to design a probabilistic circuit for reversing arbitrary quantum evolutions.
    Comment: 19 pages, 5 figures. Improved presentation, typos corrected, and some proofs are now clearer. Closer to the published version.
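    As a quick numerical illustration (separate from the paper's circuit construction), the quoted average fidelity can be evaluated directly; a minimal Python sketch, assuming the formula holds for $1 \le k \le d-1$ as the abstract's special cases suggest:

        from fractions import Fraction

        def avg_fidelity(k: int, d: int) -> Fraction:
            """Average fidelity <F> = (k+1)/(d(d-k)) for turning k calls
            of a d-dimensional unitary into its complex conjugate."""
            assert 1 <= k <= d - 1, "assumed range; k = d would divide by zero"
            return Fraction(k + 1, d * (d - k))

        # Sanity checks against the cases cited in the abstract:
        assert avg_fidelity(1, 3) == Fraction(2, 6)  # single-call case k = 1
        assert avg_fidelity(2, 3) == 1               # k = d-1 calls: unit fidelity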

    A family of total Lagrangian Petrov-Galerkin Cosserat rod finite element formulations

    The standard in rod finite element formulations is the Bubnov-Galerkin projection method, where the test functions arise from a consistent variation of the ansatz functions. This approach becomes increasingly complex when highly nonlinear ansatz functions are chosen to approximate the rod's centerline and cross-section orientations. Using a Petrov-Galerkin projection method, we propose a whole family of rod finite element formulations in which the nodal generalized virtual displacements and generalized velocities are interpolated instead of using the consistent variations and time derivatives of the ansatz functions. This approach leads to a significant simplification of the expressions in the discrete virtual work functionals. In addition, independent strategies can be chosen for interpolating the nodal centerline points and cross-section orientations. We discuss three objective interpolation strategies and give an in-depth analysis of the locking and convergence behavior of the whole family of rod finite element formulations.
    Comment: arXiv admin note: text overlap with arXiv:2301.0559
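    To make the projection idea concrete outside the rod setting: in a Petrov-Galerkin method the test space is chosen independently of the ansatz space. The following minimal sketch assumes a 1D Poisson problem ($-u'' = 1$ on $[0,1]$, $u(0)=u(1)=0$) with linear ansatz functions; nothing in it is rod-specific, it only shows where the independent test choice enters the assembled equations:

        import numpy as np

        n = 8                                  # elements
        h = 1.0 / n
        dN = np.array([-1.0, 1.0]) / h         # ansatz shape-function derivatives (linear hats)
        dM = np.array([-1.0, 1.0]) / h         # test shape-function derivatives; choosing
                                               # dM != dN would give a Petrov-Galerkin scheme

        K = np.zeros((n + 1, n + 1))
        f = np.zeros(n + 1)
        for e in range(n):
            idx = [e, e + 1]
            K[np.ix_(idx, idx)] += h * np.outer(dM, dN)  # element integral of M'_i N'_j
            f[idx] += h * 0.5                            # element integral of M_i * 1 (hats)

        u = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])      # interior nodal values after BCs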

    CoRe-Sleep: A Multimodal Fusion Framework for Time Series Robust to Imperfect Modalities

    Sleep abnormalities can have severe health consequences. Automated sleep staging, i.e. labelling the sequence of sleep stages from the patient's physiological recordings, could simplify the diagnostic process. Previous work on automated sleep staging has achieved great results, mainly relying on the EEG signal. However, often multiple sources of information are available beyond EEG. This can be particularly beneficial when the EEG recordings are noisy or even missing completely. In this paper, we propose CoRe-Sleep, a Coordinated Representation multimodal fusion network focused on improving the robustness of signal analysis on imperfect data. We demonstrate how appropriately handling multimodal information can be the key to achieving such robustness. CoRe-Sleep tolerates noisy or missing modality segments, allowing training on incomplete data. Additionally, it shows state-of-the-art performance when tested on both multimodal and unimodal data using a single model on SHHS-1, the largest publicly available study that includes sleep stage labels. The results indicate that training the model on multimodal data does positively influence performance when tested on unimodal data. This work aims at bridging the gap between automated analysis tools and their clinical utility.
    Comment: 10 pages, 4 figures, 2 tables, journal
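    One generic way to tolerate missing modalities, shown here only as an illustrative sketch and not as the CoRe-Sleep architecture, is to encode each modality into a shared ("coordinated") embedding space and fuse by averaging over the modalities actually present. The modality names, feature dimensions, and presence mask below are invented for the example:

        import torch
        import torch.nn as nn

        class MaskedFusion(nn.Module):
            """Per-modality encoders into a shared space; fusion is a mean over
            the modalities marked present, so absent inputs contribute nothing."""
            def __init__(self, in_dims, d_model=64, n_stages=5):
                super().__init__()
                self.encoders = nn.ModuleList(nn.Linear(d, d_model) for d in in_dims)
                self.head = nn.Linear(d_model, n_stages)   # sleep-stage logits

            def forward(self, xs, present):
                # xs: list of (batch, in_dim) tensors; present: (batch, n_mod) 0/1 mask
                z = torch.stack([enc(x) for enc, x in zip(self.encoders, xs)], dim=1)
                w = present.unsqueeze(-1)                  # zero out missing modalities
                fused = (z * w).sum(1) / w.sum(1).clamp(min=1)
                return self.head(fused)

        model = MaskedFusion(in_dims=[128, 64])            # e.g. EEG and ECG features (made up)
        eeg, ecg = torch.randn(4, 128), torch.randn(4, 64)
        mask = torch.tensor([[1., 1.], [1., 0.], [0., 1.], [1., 1.]])  # some segments lack a modality
        logits = model([eeg, ecg], mask)                   # (4, 5) stage logits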

    TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation

    Multi-modal sensor fusion is a commonly used approach to enhance the performance of odometry estimation, which is a fundamental module for mobile robots. However, the question of \textit{how to perform fusion among different modalities in a supervised sensor fusion odometry estimation task} remains a challenging open issue. Simple operations such as element-wise summation and concatenation are not capable of assigning adaptive attentional weights to incorporate different modalities efficiently, which makes it difficult to achieve competitive odometry results. Recently, the Transformer architecture has shown potential for multi-modal fusion tasks, particularly in vision-language domains. In this work, we propose an end-to-end supervised Transformer-based LiDAR-Inertial fusion framework (named TransFusionOdom) for odometry estimation. The multi-attention fusion module demonstrates different fusion approaches for homogeneous and heterogeneous modalities, addressing the overfitting problem that can arise from blindly increasing model complexity. Additionally, to interpret the learning process of the Transformer-based multi-modal interactions, a general visualization approach is introduced to illustrate the interactions between modalities. Moreover, exhaustive ablation studies evaluate different multi-modal fusion strategies to verify the performance of the proposed fusion strategy. A synthetic multi-modal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also works for other combinations of modalities. Quantitative and qualitative odometry evaluations on the KITTI dataset verify that the proposed TransFusionOdom achieves superior performance compared with other related works.
    Comment: Submitted to IEEE Sensors Journal with some modifications. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
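    For intuition on why attention differs from summation or concatenation, here is a generic cross-attention fusion sketch (not the TransFusionOdom module; the token counts, feature dimension, and pose head are illustrative assumptions):

        import torch
        import torch.nn as nn

        d = 64
        attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

        lidar = torch.randn(2, 32, d)   # (batch, LiDAR tokens, features) -- shapes made up
        imu   = torch.randn(2, 10, d)   # (batch, IMU tokens, features)

        # LiDAR tokens attend to IMU tokens: the weighting is learned per token,
        # rather than fixed as in element-wise summation or concatenation.
        fused, weights = attn(query=lidar, key=imu, value=imu)

        # `weights` (batch, 32, 10) can be visualized to inspect cross-modal
        # interactions, in the spirit of the paper's attention visualizations.
        pose_head = nn.Linear(d, 6)                 # 6-DoF relative pose (illustrative)
        pose = pose_head(fused.mean(dim=1))         # (batch, 6)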

    Mechanical behaviour of rubber blocks

    This study investigates the behaviour of rubber blocks bonded between two plates under combined compression and shear loading, using experimental and numerical analyses as well as approximate analytical theories. First, experimental data from a series of compression and shear tests of rubber blocks with different aspect ratios are presented. Next, numerical simulations are carried out with three-dimensional finite element (FE) models, allowing insight to be gained into the stress and strain fields within the blocks. Existing analytical theories for blocks under compression and combined compression and shear loading are then reviewed, and their accuracy is evaluated against the test and numerical results. The study shows that theories accounting for the effect of the axial shortening of the blocks describe the combined compression and shear behaviour better than theories that ignore this effect, which were developed for laminated structural bearings with many thin rubber layers. An improved theory is also proposed, which better captures the effects of the bulging of the compressed blocks on their shear and flexural parameters and provides a better fit to the experimental and numerical results.
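    For context on the kind of approximate analytical theory being reviewed, a classical Gent-Lindley-type shape-factor estimate of the apparent compression modulus of a bonded block can be written down directly. This is a textbook approximation with an assumed material constant, not the improved theory proposed in the study:

        def shape_factor(a: float, b: float, h: float) -> float:
            """S = loaded area / force-free (bulging) area for an a x b block of height h."""
            return (a * b) / (2.0 * (a + b) * h)

        def apparent_compression_modulus(G: float, S: float, k: float = 0.6) -> float:
            """Classical estimate E_c ~ E_0 (1 + 2 k S^2), with E_0 = 3G for nearly
            incompressible rubber; k is an empirical constant (assumed typical value)."""
            return 3.0 * G * (1.0 + 2.0 * k * S * S)

        S = shape_factor(a=0.1, b=0.1, h=0.05)              # squat block, dimensions made up
        E_c = apparent_compression_modulus(G=1.0e6, S=S)    # ~3.9 MPa for G = 1 MPa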

    ENABLING EFFICIENT FLEET COMPOSITION SELECTION THROUGH THE DEVELOPMENT OF A RANK HEURISTIC FOR A BRANCH AND BOUND METHOD

    In the foreseeable future, autonomous mobile robots (AMRs) will become a key enabler for increasing productivity and flexibility in material handling in warehousing facilities, distribution centers and manufacturing systems. The objective of this research is to develop and validate parametric models of AMRs, develop a ranking heuristic using a physics-based algorithm within the framework of the Branch and Bound method, integrate the ranking algorithm into a Fleet Composition Optimization (FCO) tool, and finally conduct simulations under various scenarios to verify the suitability and robustness of the developed tool in a factory equipped with AMRs. Kinematics-based equations are used for computing both energy and time consumption. Multivariate linear regression, a data-driven method, is used for designing the ranking heuristic. The results indicate that the unique physical structure and parameters of each robot are the main factors contributing to differences in energy and time consumption. An improvement in computation time was achieved, as shown by comparing heuristic-based and non-heuristic-based search. This research is expected to significantly improve the current nested fleet composition optimization tool by reducing computation time without sacrificing optimality. From a practical perspective, greater efficiency in reducing energy and time costs can be achieved.
    Ford Motor Company. No embargo. Academic Major: Aerospace Engineering
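    To illustrate what a kinematics-based time and energy estimate for a single AMR transport task might look like, here is a minimal sketch assuming a trapezoidal velocity profile and a simple rolling-resistance model; all parameter names and values are invented for illustration and are not the thesis' validated parametric models:

        import math

        def task_time_energy(dist, v_max, accel, mass, payload,
                             mu=0.01, g=9.81, p_idle=50.0):
            """Return (seconds, joules) for one point-to-point move."""
            d_ramp = v_max ** 2 / accel                  # accel + decel distance combined
            if dist >= d_ramp:                           # robot reaches cruise speed
                t = dist / v_max + v_max / accel
            else:                                        # triangular velocity profile
                t = 2.0 * math.sqrt(dist / accel)
            m = mass + payload
            v_peak = min(v_max, math.sqrt(accel * dist))
            e_kin = 0.5 * m * v_peak ** 2                # one acceleration phase
            e_roll = mu * m * g * dist                   # rolling resistance losses
            return t, e_kin + e_roll + p_idle * t        # idle power drawn over the move

        t, e = task_time_energy(dist=40.0, v_max=1.5, accel=0.5, mass=120.0, payload=80.0)

    Estimates like these, computed per robot type and task, are the kind of physics-based quantities a ranking heuristic can be regressed on to order candidate fleets inside a Branch and Bound search.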

    Advancing Model Pruning via Bi-level Optimization

    The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the potential to improve generalization ability. At the core of LTH, iterative magnitude pruning (IMP) is the predominant method for successfully finding 'winning tickets'. Yet the computational cost of IMP grows prohibitively as the targeted pruning ratio increases. To reduce this overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as those of IMP. This raises the question of how to close the gap between pruning accuracy and pruning efficiency. To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh viewpoint: bi-level optimization (BLO). We show that the BLO interpretation provides a technically grounded optimization basis for an efficient implementation of the pruning-retraining learning paradigm used in IMP. We also show that the problem addressed by the proposed bi-level optimization-oriented pruning method (termed BiP) is a special class of BLO with a bi-linear problem structure. By leveraging this bi-linearity, we theoretically show that BiP can be solved as easily as first-order optimization, thus inheriting its computational efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 datasets, we demonstrate that BiP finds better winning tickets than IMP in most cases and is computationally as efficient as the one-shot pruning schemes, demonstrating a 2-7x speedup over IMP for the same level of model accuracy and sparsity.
    Comment: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022)
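    The bi-level structure can be sketched on a toy problem: the upper level updates pruning-mask scores, the lower level updates the remaining weights, and the two couple bi-linearly through an element-wise product. The following alternating scheme is schematic, in the spirit of the formulation, and is not the BiP algorithm itself:

        import torch

        torch.manual_seed(0)
        X, y = torch.randn(256, 20), torch.randn(256, 1)
        theta = torch.randn(20, 1, requires_grad=True)   # model weights (lower level)
        score = torch.zeros(20, 1, requires_grad=True)   # mask scores   (upper level)
        opt_w = torch.optim.SGD([theta], lr=1e-2)
        opt_m = torch.optim.SGD([score], lr=1e-2)

        def loss(w, m):
            return ((X @ (w * m) - y) ** 2).mean()       # bi-linear coupling w * m

        for step in range(200):
            m_soft = torch.sigmoid(score)                # relaxed mask for gradients
            opt_w.zero_grad(); loss(theta, m_soft.detach()).backward(); opt_w.step()
            opt_m.zero_grad(); loss(theta.detach(), m_soft).backward(); opt_m.step()

        k = theta.numel() // 2                           # target 50% sparsity
        mask = torch.zeros_like(score)
        mask.view(-1)[score.view(-1).topk(k).indices] = 1.0   # keep top-k scored weights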

    Qluster: An easy-to-implement generic workflow for robust clustering of health data

    The exploration of health data by clustering algorithms makes it possible to better describe the populations of interest by seeking the sub-profiles that compose them. This reinforces medical knowledge, whether about a disease or a targeted real-life population. Nevertheless, contrary to so-called conventional biostatistical methods, for which numerous guidelines exist, the standardization of data science approaches in clinical research remains a little-discussed subject. This results in significant variability in the execution of data science projects, in terms of the algorithms used as well as the reliability and credibility of the designed approach. Taking the path of a parsimonious and judicious choice of both algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. This workflow strikes a compromise between (1) generic applicability (e.g. usable on small or big data; on continuous, categorical or mixed variables; on high-dimensional databases or not), (2) ease of implementation (need for few packages, few algorithms, few parameters, ...), and (3) robustness (e.g. use of proven algorithms and robust packages, evaluation of cluster stability, management of noise and multicollinearity). This workflow can easily be automated and/or applied routinely across a wide range of clustering projects. It can be useful both for data scientists with little experience in the field, making data clustering easier and more robust, and for more experienced data scientists looking for a straightforward and reliable solution for routine preliminary data mining. A synthesis of the literature on data clustering and the scientific rationale supporting the proposed workflow are also provided. Finally, a detailed application of the workflow to a concrete use case is presented, along with a practical discussion for data scientists. An implementation on the Dataiku platform is available upon request to the authors.
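    In the same spirit as such a workflow, the skeleton "scale the data, choose the number of clusters by an internal index, check cluster stability by resampling" can be compressed into a few lines. This generic scikit-learn sketch is an illustration of those stages, not the Qluster workflow itself:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score, adjusted_rand_score

        rng = np.random.default_rng(0)
        X = StandardScaler().fit_transform(rng.normal(size=(300, 8)))  # toy data

        # Choose k by an internal validity index (silhouette here).
        best_k = max(range(2, 8),
                     key=lambda k: silhouette_score(
                         X, KMeans(k, n_init=10, random_state=0).fit_predict(X)))

        # Assess cluster stability by refitting on bootstrap resamples and
        # comparing the induced labelings to a reference partition.
        ref = KMeans(best_k, n_init=10, random_state=0).fit_predict(X)
        stability = []
        for b in range(20):
            idx = rng.choice(len(X), size=len(X), replace=True)
            lab = KMeans(best_k, n_init=10, random_state=b).fit(X[idx]).predict(X)
            stability.append(adjusted_rand_score(ref, lab))
        print(best_k, float(np.mean(stability)))   # mean ARI as a stability score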