
    Efficient algorithms for arbitrary sample rate conversion with application to wave field synthesis

    Arbitrary sample rate conversion (ASRC) is used in many fields of digital signal processing to alter the sampling rate of discrete-time signals by arbitrary, potentially time-varying ratios. This thesis investigates efficient algorithms for ASRC and proposes several improvements. First, closed-form descriptions for the modified Farrow structure and Lagrange interpolators are derived that are directly applicable to algorithm design and analysis. Second, efficient implementation structures for ASRC algorithms are investigated. Third, this thesis considers coefficient design methods that are optimal for a selectable error norm and optional design constraints. Finally, the performance of different algorithms is compared for several performance metrics. This enables the selection of ASRC algorithms that meet the requirements of an application with minimal complexity. Wave field synthesis (WFS), a high-quality spatial sound reproduction technique, is the main application considered in this work. For WFS, sophisticated ASRC algorithms improve the quality of moving sound sources. However, the improvements proposed in this thesis are not limited to WFS, but applicable to general-purpose ASRC problems.

    Arbitrary sample rate conversion (ASRC) methods enable the sampling rate of discrete-time signals to be changed by arbitrary, time-varying ratios, and are used in many digital signal processing applications. This thesis investigates the use of ASRC methods in wave field synthesis (WFS), a technique for high-quality, spatially accurate audio reproduction. ASRC algorithms can markedly improve the reproduction quality of moving sound sources in WFS. However, the large number of simultaneous ASRC operations required in a WFS reproduction system usually rules out the direct application of high-quality algorithms. Several contributions are presented to solve this problem. The complexity of the WFS signal processing is reduced significantly by a suitable partitioning of the ASRC algorithms that enables efficient reuse of intermediate results. This allows high-quality sample rate conversion algorithms to be applied at a complexity comparable to that of simple conventional ASRC algorithms. This partitioning scheme, however, also places additional demands on ASRC algorithms and requires trade-offs between performance measures such as algorithmic complexity, memory size, and memory bandwidth. Several measures are proposed to improve algorithms and implementation structures for ASRC. First, closed-form analytic descriptions of the continuous frequency response are introduced for several classes of ASRC structures. In particular, compact representations are derived for Lagrange interpolators, the modified Farrow structure, and combinations of oversampling with continuous-time resampling functions; these give insight into the behaviour of these filters and can be used directly in design methods. A second focus is coefficient design for these structures, in particular optimal design with respect to a selected error norm and optional design conditions and constraints. In contrast to previous approaches, such optimal design methods are also presented for multi-stage ASRC structures that combine integer-ratio oversampling with continuous-time resampling functions. For this class of structures, a set of adapted resampling functions is proposed that, combined with the developed optimal design methods, enables significant quality improvements. The large number of ASRC structures and their design parameters is a main difficulty in selecting a method suitable for a given application. Evaluation and performance comparisons therefore form a third focus: the influence of different design parameters on the achievable quality of ASRC algorithms is examined, and the required effort is characterised in terms of several performance metrics as a function of design quality. In this way, the results of this work are not limited to WFS, but are usable in a wide range of arbitrary sample rate conversion applications.
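
    To make the Farrow structure concrete, the following is a minimal Python sketch of an ASRC stage built on the classical Farrow structure with cubic Lagrange branch filters (the thesis treats the modified Farrow structure; the function name and the fixed conversion ratio here are illustrative simplifications, not the thesis's implementation):

```python
import numpy as np

def farrow_lagrange3(x, ratio):
    """Resample x by an arbitrary ratio with a Farrow structure whose
    branch filters realise cubic Lagrange interpolation.

    ratio > 1 raises the sampling rate, ratio < 1 lowers it. A constant
    ratio is used for brevity; a time-varying ratio works identically by
    updating the per-sample phase increment inside the loop.
    """
    step = 1.0 / ratio            # input-time advance per output sample
    t = 1.0                       # start where x[n-1] .. x[n+2] exist
    y = []
    while t < len(x) - 2:
        n = int(t)                # integer part: base sample index
        d = t - n                 # fractional interval in [0, 1)
        xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
        # Fixed FIR branch filters of the Farrow structure
        c3 = (-xm1 + 3 * x0 - 3 * x1 + x2) / 6
        c2 = (xm1 - 2 * x0 + x1) / 2
        c1 = (-2 * xm1 - 3 * x0 + 6 * x1 - x2) / 6
        c0 = x0
        # Horner evaluation of the polynomial in the fractional delay d
        y.append(((c3 * d + c2) * d + c1) * d + c0)
        t += step
    return np.asarray(y)
```

    Only the Horner evaluation depends on the conversion ratio; the branch-filter coefficients are fixed, which is what makes this family of structures attractive for time-varying ratios.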

    Uncertainties in the Estimation of the Shear-Wave Velocity and the Small-Strain Damping Ratio from Surface Wave Analysis

    The abstract is in the attachment.

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
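
    As a minimal illustration of the error measure described above, the sketch below computes the proportion of frames whose nasalance score falls below the current threshold; the function names and the five-block threshold schedule are illustrative assumptions, not the study's protocol:

```python
import numpy as np

def error_proportion(nasalance_scores, threshold):
    """Proportion of speech frames with a nasalance score (%) below the
    threshold -- the error definition used in the study."""
    scores = np.asarray(nasalance_scores, dtype=float)
    return float(np.mean(scores < threshold))

# Errorless schedule: thresholds rise from 10% to 50% over practice;
# the errorful schedule presents the same targets in reversed order.
errorless_thresholds = np.linspace(10, 50, 5)
errorful_thresholds = errorless_thresholds[::-1]
```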

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by the human visual system as artefacts. In the absence of a visual reference (i.e. when the original scene is not available for comparison), one can improve the perceived quality of the representation by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on the display operation principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions. It also allows one to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or directly measured from a series of photographs. A comparative study introducing the measurement results on the visual quality and position of the sweet-spots of eleven 3D displays of different types is presented. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses Moiré, fixed-pattern-noise and ghosting artefacts that are characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select the so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for view-point-based optimization is presented. These algorithms require active user-tracking, which is implemented as a combination of face and eye-tracking. Once the observer position is known, the image on a stereoscopic display is optimised for the derived observation angle and distance. For multiview displays, the combination of precise light re-direction and less-precise face-tracking is used for extending the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
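
    As one concrete example of deriving a display property from such measurements, the sketch below estimates dual-view crosstalk at a given observation angle from measured angular brightness profiles; the crosstalk definition (leakage over intended luminance) and all names are illustrative assumptions, not the thesis's methodology:

```python
import numpy as np

def dual_view_crosstalk(angles_deg, lum_left, lum_right, eye_angle_deg):
    """Estimate crosstalk of a dual-view display at one observation angle.

    lum_left / lum_right give the measured luminance of the left- and
    right-view channels versus emission angle (e.g. extracted from a
    series of photographs). Crosstalk is taken as the ratio of the
    unintended channel's luminance to the intended channel's luminance.
    """
    l = np.interp(eye_angle_deg, angles_deg, lum_left)
    r = np.interp(eye_angle_deg, angles_deg, lum_right)
    intended, leakage = max(l, r), min(l, r)
    return leakage / intended
```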

    Condition monitoring for a neutral beam heating system

    This thesis presents the design of a condition monitoring scheme for the neutral beam cryogenic pumping system deployed in the Joint European Torus. The performance of the scheme is demonstrated by analysing its response to a range of fault scenarios. Condition monitoring has been successfully used in a diverse range of industries, from rail transport to commercial power generation to semiconductor manufacturing, among others. The application of model-based condition monitoring to fusion, however, has been very limited. Given the importance of improving the availability of fusion devices, it was hypothesised that model-based condition monitoring techniques could be used to good effect for this application. This provided the motivation for this research, which had the ultimate objective of demonstrating the usefulness of model-based condition monitoring for fusion devices. The cryogenic pumping system used in the neutral beam heating devices operated by the project sponsor, the Culham Centre for Fusion Energy, was selected as the target for a demonstration condition monitoring scheme. This choice of target system was made and justified by the author through an analysis of its role in the neutral beam devices. The relative merits of several model-based approaches were investigated. An observer-based residual generation scheme, utilising a Kalman filter bank and a residual thresholding arrangement, was determined to be the most suitable. A novel, accurate non-linear simulation model of the cryogenic pumping system was developed to act as a surrogate plant during the research, to facilitate the design and test procedure. This model was validated using historical process data. Two system identification techniques were used to obtain a set of linear models of the system for use in the Kalman filter bank. The scheme was tested by using the non-linear model to simulate ten different faults, all with unique failure modes. Two residual thresholding arrangements were tested and their performance was analysed to find the better-performing arrangement. It was found that both variations of the scheme could detect all ten faults. The scheme using dual thresholds to check both the direction and magnitude of the residual signals was, however, better at isolating specific faults. The non-linear simulation model developed during the research was proven to be a genuine representation of the plant by validating its response using historical process data. As such, it could be used in the future as the basis for a model-based control system design procedure. The effectiveness of the scheme at detecting a range of faults which can arise in neutral beam heating systems supports the case for the future use of model-based condition monitoring in nuclear fusion research.
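
    A minimal Python sketch of the residual-generation and dual-thresholding idea follows; the state-space matrices stand in for the identified linear models, and all names are illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

def kalman_residuals(y, A, C, Q, R, x0, P0):
    """Innovation (residual) sequence of one Kalman filter in the bank.

    Each filter in a bank is tuned to one identified linear model of the
    plant; a fault shows up as residuals that grow in the filters whose
    models no longer match the measurements y.
    """
    x, P = x0, P0
    I = np.eye(len(x0))
    residuals = []
    for yk in y:
        x = A @ x                       # predict state
        P = A @ P @ A.T + Q
        r = yk - C @ x                  # residual: measurement - prediction
        S = C @ P @ C.T + R             # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ r                   # update state with the residual
        P = (I - K @ C) @ P
        residuals.append(r)
    return np.array(residuals)

def dual_threshold(residuals, magnitude, direction_tol):
    """Dual-threshold test: flag residuals by magnitude and by sign, so
    that both the size and the direction of a deviation contribute to
    isolating a specific fault."""
    exceeds = np.abs(residuals) > magnitude
    signs = np.sign(np.where(np.abs(residuals) > direction_tol,
                             residuals, 0.0))
    return exceeds, signs
```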

    First results from the HAYSTAC axion search

    The axion is a well-motivated cold dark matter (CDM) candidate first postulated to explain the absence of CP violation in the strong interactions. CDM axions may be detected via their resonant conversion into photons in a "haloscope" detector: a tunable high-Q microwave cavity maintained at cryogenic temperature, immersed in a strong magnetic field, and coupled to a low-noise receiver. This dissertation reports on the design, commissioning, and first operation of the Haloscope at Yale Sensitive to Axion CDM (HAYSTAC), a new detector designed to search for CDM axions with masses above 20 μeV. I also describe the analysis procedure developed to derive limits on axion CDM from the first HAYSTAC data run, which excluded axion models with two-photon coupling g_{aγγ} ≳ 2×10⁻¹⁴ GeV⁻¹, a factor of 2.3 above the benchmark KSVZ model, over the mass range 23.55 < m_a < 24.0 μeV. This result represents two important achievements. First, it demonstrates cosmologically relevant sensitivity an order of magnitude higher in mass than any existing direct limits. Second, by incorporating a dilution refrigerator and Josephson parametric amplifier, HAYSTAC has demonstrated total noise approaching the standard quantum limit for the first time in a haloscope axion search. Comment: Ph.D. thesis. 346 pages, 58 figures. A few typos corrected relative to the version submitted to ProQuest.

    Modeling EMI Resulting from a Signal Via Transition Through Power/Ground Layers

    Signal transitions through vias between layers are very common in multi-layer printed circuit board (PCB) designs. For a signal via transitioning through the internal power and ground planes, the return current must switch from one reference plane to another. The discontinuity of the return current at the via excites the power and ground planes and results in noise on the power bus that can lead to signal integrity as well as EMI problems. Numerical methods, such as the finite-difference time-domain (FDTD) method, the method of moments (MoM), and the partial element equivalent circuit (PEEC) method, were employed herein to study this problem. The modeled results are supported by measurements. In addition, a common EMI mitigation approach, adding a decoupling capacitor, was investigated with the FDTD method.
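
    As a rough illustration of applying FDTD to this geometry, the sketch below runs a two-dimensional simulation of the parallel power/ground plane pair, driven by a via-like current source between the planes and observed at a probe point on the power bus; the grid size, cell size, source waveform, and boundary treatment are illustrative assumptions, not the configuration modelled in the paper:

```python
import numpy as np

# 2D FDTD of the cavity formed by a power/ground plane pair. The field
# Ez plays the role of the power-bus noise voltage between the planes.
c0, eps0, mu0 = 3.0e8, 8.854e-12, 4e-7 * np.pi
nx, ny, nt = 100, 100, 600
dx = 1e-3                          # 1 mm cells
dt = dx / (c0 * np.sqrt(2.0))      # 2D Courant stability limit

ez = np.zeros((nx, ny))
hx = np.zeros((nx, ny - 1))
hy = np.zeros((nx - 1, ny))

src = (nx // 2, ny // 2)           # via location (return-current break)
probe = (nx // 4, ny // 4)         # observation point on the power bus
trace = []

for n in range(nt):
    # Magnetic-field updates from the curl of Ez
    hx -= dt / (mu0 * dx) * np.diff(ez, axis=1)
    hy += dt / (mu0 * dx) * np.diff(ez, axis=0)
    # Electric-field update from the curl of H; the outer edge stays at
    # zero, i.e. the board edge is crudely treated as a PEC wall
    ez[1:-1, 1:-1] += dt / (eps0 * dx) * (
        np.diff(hy, axis=0)[:, 1:-1] - np.diff(hx, axis=1)[1:-1, :])
    # Gaussian current pulse injected at the via
    ez[src] += np.exp(-((n - 60) / 20.0) ** 2)
    trace.append(ez[probe])

# 'trace' now holds the simulated power-bus noise at the probe point; a
# decoupling capacitor could be modelled as a lumped element at one cell.
```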

    Modelling and analysis of amplitude, phase and synchrony in human brain activity patterns

    The critical brain hypothesis provides a framework for viewing the human brain as a critical system, which may transmit information, reorganise itself and react to external stimuli efficiently. A critical system incorporates structures at a range of spatial and temporal scales, and may be associated with power law distributions of neuronal avalanches and power law scaling functions. In the temporal domain, the critical brain hypothesis is supported by a power law decay of the autocorrelation function of neurophysiological signals, which indicates the presence of long-range temporal correlations (LRTCs). LRTCs have been found to exist in the amplitude envelope of neurophysiological signals such as EEG, EMG and MEG, which reveal patterns of local synchronisation within neuronal pools. Synchronisation is an important tool for communication in the nervous system and can also exist between disparate regions of the nervous system. In this thesis, inter-regional synchronisation is characterised by the rate of change of phase difference between neurophysiological time series at different neuronal regions and investigated using the novel phase synchrony analysis method. The phase synchrony analysis method is shown to recover the DFA exponents in time series where these are known. The method indicates that LRTCs are present in the rate of change of phase difference between time series derived from classical models of criticality at critical parameters, in particular the Ising model of ferromagnetism and the Kuramoto model of coupled oscillators. The method is also applied to the Cabral model, in which Kuramoto oscillators with natural frequencies close to those of cortical rhythms are embedded in a network based on brain connectivity. It is shown that LRTCs in the rate of change of phase difference are disrupted when the network properties of the system are reorganised. The presence of LRTCs is assessed using detrended fluctuation analysis (DFA), which assumes the linearity of a log-log plot of detrended fluctuation magnitude. In this thesis it is demonstrated that this assumption does not always hold, and a novel heuristic technique, ML-DFA, is introduced for validating DFA results. Finally, the phase synchrony analysis method is applied to EEG, EMG and MEG time series. The presence of LRTCs in the rate of change of phase difference between time series recorded from the left and right motor cortices is shown to exist during resting state, but to be disrupted by a finger tapping task. The findings of this thesis are interpreted in the light of the critical brain hypothesis, and shown to provide motivation for future research in this area.
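
    A minimal Python sketch of DFA as described above follows (first-order detrending, non-overlapping windows); the scale choices and the white-noise usage example are illustrative:

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: fluctuation magnitude F(s) per
    window scale s. An exponent near 0.5 indicates an uncorrelated
    signal; exponents between 0.5 and 1 indicate LRTCs."""
    profile = np.cumsum(signal - np.mean(signal))  # integrated series
    F = []
    for s in scales:
        n_win = len(profile) // s
        rms = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

# White noise should give a DFA exponent (log-log slope) near 0.5.
x = np.random.randn(20000)
scales = np.unique(np.logspace(1.5, 3.3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

    The exponent alpha is meaningful only if log F(s) is actually linear in log s, which is exactly the assumption that ML-DFA is introduced to check.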

    Methodology of optical topography measurements for functional brain imaging and the development and implementation of functional optical signal analysis software.

    Near-infrared spectroscopy (NIRS) has been used extensively in recent years as a non-invasive tool for investigating cerebral hemodynamics and oxygenation. The technique exploits the different optical absorption of oxy-haemoglobin and deoxy-haemoglobin in the near-infrared region to measure changes in their concentrations in tissue. By making multiple NIRS measurements simultaneously, optical topography (OT) provides spatial maps of the changes in haemoglobin concentration levels from specific regions of the cerebral cortex. The thesis describes several key developments in optical topography studies of functional brain activation. These include the development of novel data analysis software to process the experimental data and a new statistical methodology for examining the spatial and temporal variance of OT data. The experimental work involved the design of a cognitive task to measure the haemodynamic response using a 24-channel Hitachi ETG-100 OT system. Following a series of pilot studies, a study of twins with opposite handedness was conducted to investigate the functional changes in the parietal region of the brain. Changes in systemic variables were also investigated. A dynamic phantom with optical properties similar to those of biological tissues was developed with the use of liquid crystals to simulate spatially varying changes in haemodynamics. A new software tool was developed to provide a flexible processing approach with real-time analysis of the optical signals and advanced statistical analysis. Unlike conventional statistical measures, which compare pre-defined activation and task periods, the thesis describes the incorporation of a Statistical Parametric Mapping toolbox which enables statistical inference about the spatially-resolved topographic data to be made. The use of the general linear model not only computes the temporal correlations between the defined model and the optical signals but also corrects for the spatial correlations between neighbouring measurement points. The issues related to collecting functional activation data using optical topography are fully discussed, with the view that the work presented in this thesis will extend the applicability of this technology.
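
    A minimal Python sketch of the general linear model step for a single optical topography channel follows; the regressor construction and all names are illustrative assumptions, and the spatial correction between neighbouring channels described above is omitted:

```python
import numpy as np

def glm_fit(y, boxcar, hrf):
    """Fit a GLM to one channel's haemoglobin time series.

    The task regressor is the stimulus boxcar convolved with a
    haemodynamic response function (HRF); returns the regressor's
    beta weight and its t statistic.
    """
    regressor = np.convolve(boxcar, hrf)[: len(y)]
    X = np.column_stack([regressor, np.ones(len(y))])  # design matrix
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    dof = len(y) - X.shape[1]
    sigma2 = np.sum((y - X @ beta) ** 2) / dof          # residual variance
    var_beta = sigma2 * np.linalg.inv(X.T @ X)[0, 0]
    return beta[0], beta[0] / np.sqrt(var_beta)
```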