
    Performance evaluation of the Hilbert–Huang transform for respiratory sound analysis and its application to continuous adventitious sound characterization

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
    The use of the Hilbert–Huang transform in the analysis of biomedical signals has increased during the past few years, but its use for respiratory sound (RS) analysis is still limited. The technique includes two steps: empirical mode decomposition (EMD) and instantaneous frequency (IF) estimation. Although the mode mixing (MM) problem of EMD has been widely discussed, this technique continues to be used in many RS analysis algorithms. In this study, we analyzed the MM effect in RS signals recorded from 30 asthmatic patients, and studied the performance of ensemble EMD (EEMD) and noise-assisted multivariate EMD (NA-MEMD) as means of preventing this effect. We propose quantitative parameters for measuring the size and reduction of MM, and the residual noise level, of each method. These parameters showed that EEMD is a good solution for MM, outperforming NA-MEMD. After testing different IF estimators, we propose Kay's method to calculate an EEMD-Kay-based Hilbert spectrum that offers high energy concentration and high time and frequency resolution. We also propose an algorithm for the automatic characterization of continuous adventitious sounds (CAS). The tests performed showed that the proposed EEMD-Kay-based Hilbert spectrum makes it possible to determine CAS more precisely than other conventional time-frequency techniques.
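    As a rough illustration of the two HHT steps described above, the sketch below (in Python) decomposes a synthetic signal with ensemble EMD and then estimates instantaneous frequency from the analytic signal. It assumes the third-party PyEMD package (EMD-signal on PyPI) and uses a Hilbert-transform IF estimator rather than Kay's method, so it is a minimal stand-in for the EEMD-Kay pipeline, not a reimplementation of it.

    # Minimal sketch of the two HHT steps: ensemble EMD, then instantaneous
    # frequency. Assumes the third-party PyEMD package (pip install EMD-signal);
    # the IF step uses the analytic-signal (Hilbert) estimator, not Kay's
    # estimator used in the paper.
    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EEMD

    fs = 5000                                  # sampling rate in Hz (illustrative)
    t = np.arange(0, 2.0, 1.0 / fs)
    # Toy signal: a 400 Hz wheeze-like tone buried in broadband noise.
    signal = np.sin(2 * np.pi * 400 * t) + 0.5 * np.random.randn(t.size)

    # Step 1: ensemble EMD (noise-assisted averaging mitigates mode mixing).
    eemd = EEMD(trials=100, noise_width=0.2)
    imfs = eemd.eemd(signal, t)                # rows are intrinsic mode functions

    # Step 2: instantaneous amplitude and frequency of each IMF.
    analytic = hilbert(imfs, axis=1)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_freq = np.diff(phase, axis=1) * fs / (2 * np.pi)   # Hz, one sample shorter

    # The Hilbert spectrum distributes `amplitude` over the (time, inst_freq) plane.
    print(imfs.shape, inst_freq.shape)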

    Algebraic and algorithmic frameworks for optimized quantum measurements

    Von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurements has often been ad hoc and setup-specific. We here develop a unified framework for designing optimized measurements. Our approach is twofold: the first part is algebraic and formulates the problem of measurement as a simple matrix diagonalization problem; the second is algorithmic and models the optimal interaction between measurement outcomes and measurement settings as a cascaded network of conditional probabilities. Finally, we demonstrate that several figures of merit, such as Bell factors, can be improved by optimized measurements. This leads us to the promising observation that measurement detectors which, taken individually, have a low quantum efficiency can be arranged into circuits where, collectively, the limitations of inefficiency are compensated for.
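    The "measurement design as matrix diagonalization" idea can be illustrated with a textbook special case that is independent of this paper's framework: the optimal two-outcome measurement for discriminating two known states, obtained by diagonalizing a weighted difference of their density matrices. The Python sketch below shows only the flavor of the algebraic step.

    # Sketch: optimal two-outcome measurement for discriminating two known qubit
    # states, found by diagonalizing the Helstrom matrix p0*rho0 - p1*rho1.
    # This is a standard textbook construction, shown here only to illustrate
    # "measurement design as diagonalization"; it is not this paper's framework.
    import numpy as np

    def helstrom_measurement(rho0, rho1, p0=0.5):
        """Return projectors (E0, E1) and the optimal success probability."""
        p1 = 1.0 - p0
        gamma = p0 * rho0 - p1 * rho1          # Helstrom matrix
        evals, evecs = np.linalg.eigh(gamma)   # the diagonalization step
        pos = evecs[:, evals > 0]              # positive eigenspace -> outcome 0
        E0 = pos @ pos.conj().T
        E1 = np.eye(rho0.shape[0]) - E0
        p_succ = 0.5 + 0.5 * np.sum(np.abs(evals))
        return E0, E1, p_succ

    # Two non-orthogonal pure qubit states.
    theta = np.pi / 8
    psi0 = np.array([1.0, 0.0])
    psi1 = np.array([np.cos(theta), np.sin(theta)])
    E0, E1, p_succ = helstrom_measurement(np.outer(psi0, psi0), np.outer(psi1, psi1))
    print(f"optimal success probability: {p_succ:.4f}")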

    Holographic duality from random tensor networks

    Tensor networks provide a natural framework for exploring holographic duality because they obey entanglement area laws. They have been used to construct explicit toy models realizing many interesting structural features of the AdS/CFT correspondence, including the non-uniqueness of bulk operator reconstruction in the boundary theory. In this article, we explore the holographic properties of networks of random tensors. We find that our models naturally incorporate many features that are analogous to those of the AdS/CFT correspondence. When the bond dimension of the tensors is large, we show that the entanglement entropy of boundary regions, whether connected or not, obeys the Ryu-Takayanagi entropy formula, a fact closely related to known properties of the multipartite entanglement of assistance. Moreover, we find that each boundary region faithfully encodes the physics of the entire bulk entanglement wedge. Our method is to interpret the average over random tensors as the partition function of a classical ferromagnetic Ising model, so that the minimal surfaces of Ryu-Takayanagi appear as domain walls. Upon including the analog of a bulk field, we find that our model reproduces the expected corrections to the Ryu-Takayanagi formula: the minimal surface is displaced and the entropy is augmented by the entanglement of the bulk field. Increasing the entanglement of the bulk field ultimately changes the minimal surface topologically, in a way similar to the creation of a black hole. Extrapolating bulk correlation functions to the boundary permits the calculation of the scaling dimensions of boundary operators, which exhibit a large gap between a small number of low-dimension operators and the rest. While we are primarily motivated by AdS/CFT duality, our main results define a more general form of bulk-boundary correspondence which could be useful for extending holography to other spacetimes.
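    As a toy numerical check of the large-bond-dimension statement above (not of the paper's Ising-model derivation), the Python sketch below builds a random pure state on several boundary legs and compares the entanglement entropy of a subregion with a minimal-cut value of min(k, n-k) times log D.

    # Toy check: for a random pure state on n boundary legs of bond dimension D,
    # the entanglement entropy of k legs approaches min(k, n-k) * ln(D) as D
    # grows, mirroring a minimal-cut (Ryu-Takayanagi-like) formula. This
    # illustrates the large-bond-dimension claim, not the paper's Ising mapping.
    import numpy as np

    def random_state_entropy(D, n_legs, k, seed=0):
        """Entanglement entropy (nats) of the first k legs of a random pure state."""
        rng = np.random.default_rng(seed)
        dim_A, dim_B = D**k, D**(n_legs - k)
        psi = rng.standard_normal((dim_A, dim_B)) + 1j * rng.standard_normal((dim_A, dim_B))
        psi /= np.linalg.norm(psi)
        s = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
        p = s**2
        p = p[p > 1e-15]
        return float(-np.sum(p * np.log(p)))

    for D in (2, 4, 8):
        S = random_state_entropy(D, n_legs=4, k=1)
        print(f"D={D}: S = {S:.3f}   vs   min-cut value ln(D) = {np.log(D):.3f}")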

    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, with different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Also, models identified using specific input techniques did not perform well when tested in other conditions, demonstrating that the validity of offset models is limited to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of the preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and by between 5.3% and 11.9% for portrait-based methods.
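    For readers unfamiliar with touch-offset models, the Python sketch below fits a simple polynomial regression from recorded tap coordinates to the intended target coordinates, which is one common way such models are built; the quadratic feature set and least-squares fit are illustrative choices, not the model family used in this study.

    # Sketch of a touch-offset model: predict the intended target location from
    # the sensed tap location, trained on (tap, target) pairs recorded in one
    # condition. The quadratic polynomial regression is illustrative only and is
    # not necessarily the model family used in the study above.
    import numpy as np

    def design_matrix(taps):
        """Quadratic polynomial features of the 2-D tap positions (x, y)."""
        x, y = taps[:, 0], taps[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_offset_model(taps, targets):
        """Least-squares fit mapping tap positions to intended target positions."""
        coef, *_ = np.linalg.lstsq(design_matrix(taps), targets, rcond=None)
        return coef                              # shape (6, 2): one column per axis

    def correct(taps, coef):
        return design_matrix(taps) @ coef

    # Synthetic example: taps land with a small systematic offset from the targets.
    rng = np.random.default_rng(0)
    targets = rng.uniform(0.0, 1.0, size=(500, 2))
    taps = targets + np.array([0.02, -0.03]) + 0.01 * rng.standard_normal((500, 2))

    coef = fit_offset_model(taps, targets)
    err_before = np.linalg.norm(taps - targets, axis=1).mean()
    err_after = np.linalg.norm(correct(taps, coef) - targets, axis=1).mean()
    print(f"mean error before: {err_before:.4f}, after correction: {err_after:.4f}")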

    Measurements in two bases are sufficient for certifying high-dimensional entanglement

    High-dimensional encoding of quantum information provides a promising method of transcending current limitations in quantum communication. One of the central challenges in the pursuit of such an approach is the certification of high-dimensional entanglement. In particular, it is desirable to do so without resorting to inefficient full state tomography. Here, we show how carefully constructed measurements in two bases (one of which is not orthonormal) can be used to faithfully and efficiently certify bipartite high-dimensional states and their entanglement for any physical platform. To showcase the practicality of this approach under realistic conditions, we put it to the test for photons entangled in their orbital angular momentum. In our experimental setup, we are able to verify 9-dimensional entanglement for a pair of photons on an 11-dimensional subspace each, at present the highest amount certified without any assumptions on the state.
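    The certification above ultimately rests on bounding the fidelity of the measured state to a target maximally entangled state: a standard fact, independent of this paper's two-basis construction, is that a fidelity above k/d rules out Schmidt number k or lower. The Python sketch below applies that criterion to a noisy maximally entangled state; the isotropic noise model and the dimension are illustrative, and the paper's actual contribution, estimating the fidelity bound from measurements in only two bases, is not reproduced here.

    # Sketch of the fidelity-based dimension witness: if the fidelity of a d x d
    # bipartite state to the maximally entangled state exceeds k/d, its Schmidt
    # number is at least k+1. The isotropic-noise state and dimension below are
    # illustrative; estimating this fidelity from only two measurement bases is
    # the paper's contribution and is not reproduced here.
    import numpy as np

    d = 11                                         # local dimension (illustrative)
    phi = np.eye(d).reshape(d * d) / np.sqrt(d)    # |Phi> = sum_i |ii> / sqrt(d)
    Phi = np.outer(phi, phi)                       # projector onto |Phi>

    p = 0.85                                       # visibility of the noisy state
    rho = p * Phi + (1 - p) * np.eye(d * d) / (d * d)

    fidelity = float(np.real(np.trace(rho @ Phi)))           # <Phi| rho |Phi>
    # The largest k with fidelity > k/d certifies a Schmidt number of at least k+1.
    certified = int(np.floor(fidelity * d - 1e-12)) + 1
    print(f"fidelity to |Phi>: {fidelity:.3f} -> entanglement dimension >= {certified}")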