
    Porometry, porosimetry, image analysis and void network modelling in the study of the pore-level properties of filters

    We present fundamental and quantitative comparisons between the techniques of porometry (or flow permporometry), porosimetry, image analysis and void network modelling for seven types of filter, chosen to encompass the range of simple to complex void structure. They were metal, cellulose and glass fibre macro- and meso-porous filters of various types. The comparisons allow a general re-appraisal of the limitations of each technique for measuring void structures. Porometry is shown to give unrealistically narrow void size distributions, but the correct filtration characteristic when calibrated. Shielded mercury porosimetry can give the quaternary (sample-level anisotropic) characteristics of the void structure. The first derivative of a mercury porosimetry intrusion curve is shown to underestimate the number of large voids, but this error can be largely corrected by the use of a void network model. The model was also used to simulate the full filtration characteristic of each sample, which agreed with the manufacturers' filtration ratings. The model was validated through its correct a priori simulation of absolute gas permeabilities for track-etch, cellulose nitrate and sintered powder filters.
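    As an illustrative aside, the sketch below shows how a pore-size distribution is conventionally obtained as the first derivative of a mercury intrusion curve via the Washburn equation. The function names and input arrays are hypothetical, and, as the abstract notes, this raw derivative inherits the shielding (ink-bottle) bias that the authors correct with a void network model: large voids reached only through narrow throats are assigned to the throat size.

```python
import numpy as np

# Mercury properties (typical literature values; adjust for your instrument)
GAMMA = 0.485               # surface tension of mercury, N/m
THETA = np.radians(140.0)   # mercury/solid contact angle

def washburn_diameter(pressure_pa):
    """Pore (throat) diameter from the Washburn equation, d = -4*gamma*cos(theta)/P."""
    return -4.0 * GAMMA * np.cos(THETA) / pressure_pa

def intrusion_psd(pressure_pa, cum_volume):
    """First-derivative pore-size distribution dV/dlog(d) from an intrusion curve.

    pressure_pa : monotonically increasing applied pressures (Pa)
    cum_volume  : cumulative intruded volume at each pressure (e.g. mL/g)
    """
    d = washburn_diameter(pressure_pa)          # diameters shrink as pressure rises
    log_d = np.log10(d)
    dV_dlogd = np.gradient(cum_volume, log_d)   # derivative along the log-diameter axis
    return d, -dV_dlogd                         # sign flip: volume grows as d falls
```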

    Using a deep neural network to speed up a model of loudness for time-varying sounds

    The “time-varying loudness (TVL)” model calculates “instantaneous loudness” every 1 ms, and this is used to generate predictions of short-term loudness, the loudness of a short segment of sound such as a word in a sentence, and of long-term loudness, the loudness of a longer segment of sound, such as a whole sentence. The calculation of instantaneous loudness is computationally intensive, and real-time implementation of the TVL model is difficult. To speed up the computation, a deep neural network (DNN) was trained to predict instantaneous loudness using a large database of speech sounds and artificial sounds (tones alone and tones in white or pink noise), with the predictions of the TVL model as a reference (providing the “correct” answer, specifically the loudness level in phons). A multilayer perceptron with three hidden layers was found to be sufficient; more complex DNN architectures did not yield higher accuracy. After training, the deviations between the predictions of the TVL model and the predictions of the DNN were typically less than 0.5 phons, even for types of sounds that were not used for training (music, rain, animal sounds, a washing machine). The DNN calculates instantaneous loudness over 100 times more quickly than the TVL model.
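    A minimal sketch of the kind of architecture the abstract describes: a three-hidden-layer multilayer perceptron regressing loudness level in phons against TVL-model targets. The input features and layer widths are assumptions for illustration, not the published configuration; PyTorch is used here for convenience.

```python
import torch
import torch.nn as nn

class LoudnessMLP(nn.Module):
    """MLP with three hidden layers, mirroring the depth the abstract reports
    as sufficient. Input size and widths are illustrative assumptions."""
    def __init__(self, n_inputs=64, n_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),   # predicted instantaneous loudness level, phons
        )

    def forward(self, x):
        return self.net(x)

model = LoudnessMLP()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features, tvl_phons):
    """One regression step: TVL-model outputs serve as the 'correct' targets.
    features: (batch, 64) tensor; tvl_phons: (batch, 1) tensor."""
    optimiser.zero_grad()
    loss = loss_fn(model(features), tvl_phons)
    loss.backward()
    optimiser.step()
    return loss.item()
```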

    The implementation of efficient hearing tests using machine learning

    Time-efficient hearing tests are important in both clinical practice and research studies. Bayesian active learning (BAL) methods were first proposed in the 1990s. We developed BAL methods for measuring the audiogram, conducting notched-noise tests, determining the edge frequency of a dead region (fe), and estimating equal-loudness contours. The methods all use a probabilistic model of the outcome, which can be a classification (audible/inaudible), a regression (loudness) or model parameters (fe, outer hair cell loss at fe). The stimulus parameters for the next trial (e.g. frequency, level) are chosen to yield the maximum reduction in the uncertainty of the parameters of the probabilistic model. The approach reduced testing time by a factor of about 5 and, for some tests, yielded results on a continuous frequency scale. For example, auditory filter shapes can be estimated for centre frequencies from 500 to 4000 Hz in 20 to 30 minutes. The probabilistic modelling allows quantitative comparison of different methods. For audiogram determination, asking subjects to count the number of audible tones in a sequence with decreasing level was slightly more efficient than requiring Yes/No responses. Counting tones yielded higher variance for a single response, but this was offset by the higher information per trial.
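    A minimal sketch of the trial-selection idea, reduced to a single-frequency Yes/No audiogram: a grid posterior over the threshold is updated after each response, and the next presentation level is the one that minimizes the expected posterior entropy, i.e. maximizes the expected uncertainty reduction. All parameter values are illustrative assumptions; the published methods use richer models.

```python
import numpy as np
from scipy.stats import norm

# Grid over the model parameter: hearing threshold in dB HL (assumed prior range)
thresholds = np.linspace(-10, 90, 201)
posterior = np.ones_like(thresholds) / thresholds.size   # flat prior

SLOPE = 0.2  # psychometric-function slope, an illustrative assumption

def p_audible(level_db, threshold_db):
    """Probability of a 'Yes' response: cumulative-Gaussian psychometric function."""
    return norm.cdf(SLOPE * (level_db - threshold_db))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(level_db):
    """Average entropy of the updated posterior over the two possible outcomes."""
    like_yes = p_audible(level_db, thresholds)
    p_yes = np.sum(posterior * like_yes)
    post_yes = posterior * like_yes
    post_no = posterior * (1 - like_yes)
    post_yes /= post_yes.sum()
    post_no /= post_no.sum()
    return p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)

def next_stimulus(candidate_levels):
    """Active-learning step: pick the level with maximum expected information gain."""
    return min(candidate_levels, key=expected_posterior_entropy)

def update(level_db, heard):
    """Bayes update of the threshold posterior after the subject's response."""
    global posterior
    like = p_audible(level_db, thresholds)
    posterior *= like if heard else (1 - like)
    posterior /= posterior.sum()
```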

    The chronostratigraphy of the Anthropocene in southern Africa: Current status and potential

    The process for the formal ratification of the proposed Anthropocene Epoch involves the identification of a globally isochronous stratigraphic signal to mark its starting point. The search for a Global Boundary Stratotype Section and Point (GSSP), a unique reference sequence that would be used to fix the start of the epoch, is in progress, but none of the candidate sections are located in Africa. We assessed the currently available stratigraphic evidence for possible markers of the Anthropocene in southern Africa and found that, although most markers have been identified in the region, the robustly dated, high-resolution records required for a GSSP are very sparse. We then assessed the extent and stratigraphic resolution of a range of potential natural archives and conclude that a small number of permanent lakes, as well as marine sediments, corals and peats from selected locations in southern Africa, could provide the temporal resolution required. With sufficient chronological control and multi-proxy analyses, one of these archives could provide a useful auxiliary stratotype, thereby helping to confirm the global reach, and extend the utility, of the selected Anthropocene GSSP.

    Application of Bayesian Active Learning to the Estimation of Auditory Filter Shapes Using the Notched-Noise Method.

    Time-efficient hearing tests are important in both clinical practice and research studies. This applies particularly to notched-noise tests, which are rarely done in clinical practice because of the time required. Auditory-filter shapes derived from notched-noise data may be useful for diagnosing the cause of hearing loss and for fitting hearing aids, especially if measured over a wide range of center frequencies. To reduce the testing time, we applied Bayesian active learning (BAL) to the notched-noise test, picking the most informative stimulus parameters for each trial based on nine Gaussian processes. A total of 11 hearing-impaired subjects were tested. In 20 to 30 min, the test provided estimates of signal threshold as a continuous function of frequency from 500 to 4000 Hz for nine notch widths, with notches placed both symmetrically and asymmetrically around the signal frequency. The thresholds were found to be consistent with those obtained using a 2-up/1-down forced-choice procedure at a single center frequency. In particular, differences in threshold between the methods did not vary with notch width. An independent second run of the BAL test for one notch width showed that the method is reliable. The data from the BAL test were used to estimate auditory-filter width, asymmetry and detection efficiency for center frequencies from 500 to 4000 Hz. The results agreed with expectations for cochlear hearing losses derived from the audiogram and a hearing model.
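    A hedged sketch of one ingredient of such a test: a Gaussian-process model of threshold versus log center frequency for a single notch condition, with the next trial placed where the predictive uncertainty is largest. Kernel choices and hyperparameters here are assumptions, not the paper's settings; scikit-learn is used for brevity, and the published method selects full stimulus parameter sets, not just frequency.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# One GP per notch condition, echoing the abstract's nine-GP setup;
# kernel hyperparameters are illustrative assumptions.
kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=1.0)

def fit_threshold_gp(log_freqs, thresholds_db):
    """Fit signal threshold (dB) as a continuous function of log center frequency."""
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(log_freqs.reshape(-1, 1), thresholds_db)
    return gp

def most_informative_frequency(gp, f_lo=500.0, f_hi=4000.0, n_grid=200):
    """Active-learning step: probe where the GP's predictive uncertainty is largest."""
    grid = np.linspace(np.log10(f_lo), np.log10(f_hi), n_grid).reshape(-1, 1)
    _, std = gp.predict(grid, return_std=True)
    return 10 ** grid[np.argmax(std), 0]   # center frequency (Hz) for the next trial
```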

    Stabilizing two-dimensional quantum scars by deformation and synchronization

    Relaxation to a thermal state is the inevitable fate of non-equilibrium interacting quantum systems without special conservation laws. While thermalization in one-dimensional (1D) systems can often be suppressed by integrability mechanisms, in two spatial dimensions thermalization is expected to be far more effective because of the increased phase space. In this work we propose a general framework for escaping or delaying the emergence of the thermal state in two-dimensional (2D) arrays of Rydberg atoms via the mechanism of quantum scars, i.e. initial states that fail to thermalize. The suppression of thermalization is achieved in two complementary ways: by adding local perturbations or by adjusting the driving Rabi frequency according to the local connectivity of the lattice. We demonstrate that these mechanisms make it possible to realize robust quantum scars in various two-dimensional lattices, including decorated lattices with non-constant connectivity. In particular, we show that a small decrease of the Rabi frequency at the corners of the lattice is crucial for mitigating the strong boundary effects in two-dimensional systems. Our results identify synchronization as an important tool for future experiments on two-dimensional quantum scars.
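    To make the site-dependent driving concrete, here is a minimal sketch of a PXP-type Rydberg-blockade Hamiltonian with a per-site Rabi frequency, written for a small 1D chain. The paper's setting is 2D, and the edge-reduction factor below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Single-site operators in the {|0>, |1>} (ground/Rydberg) basis
X = np.array([[0, 1], [1, 0]], dtype=float)
P = np.array([[1, 0], [0, 0]], dtype=float)   # projector onto the ground state
I = np.eye(2)

def kron_chain(ops):
    """Tensor product of a list of single-site operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def pxp_hamiltonian(omegas):
    """PXP Hamiltonian with site-dependent Rabi frequencies Omega_i:
    H = sum_i Omega_i * P_{i-1} X_i P_{i+1}, open boundaries."""
    n = len(omegas)
    H = np.zeros((2**n, 2**n))
    for i, omega in enumerate(omegas):
        ops = [I] * n
        if i > 0:
            ops[i - 1] = P
        ops[i] = X
        if i < n - 1:
            ops[i + 1] = P
        H += omega * kron_chain(ops)
    return H

# Synchronization idea in 1D: slightly reduce the drive at the boundary sites
# (the 2D analogue reduces Omega at lattice corners). 0.9 is a made-up factor.
n_sites = 8
omegas = np.ones(n_sites)
omegas[0] = omegas[-1] = 0.9
H = pxp_hamiltonian(omegas)
```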

    Toward a New UV Index Diagnostic in the Met Office's Forecast Model

    The United Kingdom sporadically experiences low-ozone events in the spring which can increase UV to harmful levels; this is particularly dangerous because the public does not expect sunburn at this time of year. This study investigates the benefits to the UV Index diagnostic produced by the Unified Model (UM) at the Met Office of including a more highly resolved spectrum, forecast ozone profiles from the ECMWF CAMS database, or both. Two new configurations of the spectral parameters governing the radiative transfer calculation over the UV region are formulated using the correlated-k method to give surface fluxes that are within 0.1 UV Index of an accurate reference scheme. Clear-sky comparisons of modeled fluxes with ground-based spectral observations at two UK sites (Reading and Chilton) between 2011 and 2015 show that when raw CAMS ozone profiles are included, noontime UV indices are always overestimated, by up to 3 UV indices during a low-ozone event and up to 1.5 on a clear summer day, suggesting that CAMS ozone concentrations are too low. The new spectral parameterizations reduce UV Index biases, except when combined with ozone profiles that are significantly underestimated. When the same biases are examined spectrally across the UV region, some low biases on low-ozone days are found to be the result of compensating errors in different parts of the spectrum. Aerosols are postulated to be an additional source of error if their actual concentrations are higher than those modeled.
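    For reference, the UV Index is defined as 40 m² W⁻¹ times the erythemally weighted integral of surface spectral irradiance, using the CIE (McKinlay-Diffey) action spectrum. The sketch below computes it from a measured or modeled spectrum; it also shows why errors in different parts of the spectrum can compensate in the integrated index, as the abstract reports.

```python
import numpy as np

def erythemal_weight(wavelength_nm):
    """CIE (McKinlay-Diffey) erythemal action spectrum."""
    lam = np.asarray(wavelength_nm, dtype=float)
    return np.where(lam <= 298.0, 1.0,
           np.where(lam <= 328.0, 10 ** (0.094 * (298.0 - lam)),
                                  10 ** (0.015 * (139.0 - lam))))

def uv_index(wavelength_nm, irradiance_w_m2_nm):
    """UV Index = 40 m^2/W times the erythemally weighted integral of surface
    spectral irradiance (W m^-2 nm^-1) over the UV region (~250-400 nm)."""
    weighted = irradiance_w_m2_nm * erythemal_weight(wavelength_nm)
    return 40.0 * np.trapz(weighted, wavelength_nm)
```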

    Slow quantum thermalization and many-body revivals from mixed phase space

    The relaxation of few-body quantum systems can strongly depend on the initial state when the system’s semiclassical phase space is mixed; i.e., regions of chaotic motion coexist with regular islands. In recent years, there has been much effort to understand the process of thermalization in strongly interacting quantum systems that often lack an obvious semiclassical limit. The time-dependent variational principle (TDVP) allows one to systematically derive an effective classical (nonlinear) dynamical system by projecting unitary many-body dynamics onto a manifold of weakly entangled variational states. We demonstrate that such dynamical systems generally possess mixed phase space. When TDVP errors are small, the mixed phase space leaves a footprint on the exact dynamics of the quantum model. For example, when the system is initialized in a state belonging to a stable periodic orbit or the surrounding regular region, it exhibits persistent many-body quantum revivals. As a proof of principle, we identify new types of “quantum many-body scars,” i.e., initial states that lead to long-time oscillations in a model of interacting Rydberg atoms in one and two dimensions. Intriguingly, the initial states that give rise to the most robust revivals are typically entangled states. On the other hand, even when TDVP errors are large, as in the thermalizing tilted-field Ising model, initializing the system in a regular region of phase space leads to a surprising slowdown of thermalization. Our work establishes TDVP as a method for identifying interacting quantum systems with anomalous dynamics in arbitrary dimensions. Moreover, the mixed-phase-space classical variational equations allow one to find slowly thermalizing initial conditions in interacting models. Our results shed light on a link between classical and quantum chaos, pointing toward possible extensions of the classical Kolmogorov-Arnold-Moser theorem to quantum systems.
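    The paper's TDVP-projected equations are not reproduced here; as a generic illustration of the mixed phase space the abstract invokes (KAM-stable regular islands embedded in a chaotic sea), the sketch below plots a Poincaré section of the Chirikov standard map, a textbook system with exactly this structure.

```python
import numpy as np
import matplotlib.pyplot as plt

def standard_map_orbits(k=0.9, n_orbits=40, n_steps=400, seed=0):
    """Iterate the Chirikov standard map from random initial conditions.
    At k = 0.9 regular islands coexist with a chaotic sea (mixed phase space)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_orbits)
    p = rng.uniform(-np.pi, np.pi, n_orbits)
    points = []
    for _ in range(n_steps):
        p = (p + k * np.sin(theta) + np.pi) % (2 * np.pi) - np.pi   # kick, wrapped
        theta = (theta + p) % (2 * np.pi)                           # drift
        points.append((theta.copy(), p.copy()))
    return points

for theta, p in standard_map_orbits():
    plt.plot(theta, p, ".", markersize=0.5)
plt.xlabel("theta")
plt.ylabel("p")
plt.title("Mixed phase space of the standard map")
plt.show()
```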