    Hybrid Wavelet-Support Vector Classifiers

    The Support Vector Machine (SVM) represents a new and very promising technique for machine learning tasks involving classification, regression or novelty detection. Its generalization ability can be improved by incorporating prior knowledge of the task at hand. We propose a new hybrid algorithm consisting of signal-adapted wavelet decompositions and SVMs for waveform classification. The adaptation of the wavelet decompositions is tailor-made for SVMs with radial basis functions as kernels. It allows the optimization of the representation of the data before training the SVM and does not suffer from computationally expensive validation techniques. We assess the performance of our algorithm against the background of current concerns in medical diagnostics, namely the classification of endocardial electrograms and the detection of otoacoustic emissions. Here the performance of SVMs can be significantly improved by our adapted preprocessing step.
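
    As a rough illustration of the pipeline described above (wavelet features feeding an RBF-kernel SVM), here is a minimal Python sketch using PyWavelets and scikit-learn. It is a generic reconstruction, not the authors' signal-adapted decomposition: the db4 wavelet, the per-subband energy features and the toy waveforms are all assumptions.

    ```python
    import numpy as np
    import pywt
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def wavelet_features(signal, wavelet="db4", level=4):
        # Decompose the waveform and keep per-subband energies as features.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([float(np.sum(c ** 2)) for c in coeffs])

    # Toy two-class waveforms: sinusoids at different frequencies plus noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 256)
    labels = rng.integers(0, 2, size=200)
    X = np.array([
        wavelet_features(np.sin(2 * np.pi * (5 + 20 * y) * t)
                         + 0.3 * rng.standard_normal(t.size))
        for y in labels
    ])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```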

    Exascale computer system design : the square kilometre array

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 361)

    This bibliography lists 141 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during Mar. 1992. Subject coverage includes: aerospace medicine and physiology, life support systems and man/system technology, protective clothing, exobiology and extraterrestrial life, planetary biology, and flight crew behavior and performance.

    Design of a novel X-section architecture for FX-correlator in large interferometers : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Auckland, New Zealand

    Figures 2-12 and 2-17 are re-used under CC BY-NC 4.0 International and CC 3.0 Unported licences respectively. Published journal papers I-III in the Appendices were removed because they are subject to copyright restrictions.

    In large radio interferometers it is considerably challenging to perform signal correlations at input data rates of over 11 Tbps, a task that involves vast amounts of storage, memory bandwidth and computational hardware. The primary objective of this research work is to reduce the memory-access and design complexity of the matrix-architecture Big Data processing in the complex X-section of an FX-correlator employed in large-array radio telescopes. This thesis presents a dedicated correlator-system-multiplier-and-accumulator (CoSMAC) cell architecture, based on the real input samples from antenna arrays, which produces two 16-bit complex multiplications in the same clock cycle. The novel correlator cell optimisation is achieved by utilising the flipped-mirror relationship between Discrete Fourier Transform (DFT) samples, owing to the symmetry and periodicity of the DFT coefficient vectors. The proposed CoSMAC structure is extended to build a new processing element (PE) which calculates both cross-correlation visibilities and auto-correlation functions simultaneously. Further, a novel mathematical model and a hardware design are derived to calculate two visibilities per baseline for quadrature signals (IQ-sampled signals, where I is the in-phase signal and Q is the 90-degree phase-shifted signal), named the Processing Element for IQ-sampled signals (PE_IQ). These three proposed dedicated correlator cells minimise the number of visibility calculations in a baseline. The design methodology also targets the optimisation of the multiplier size in order to further reduce the power and area of the CoSMAC, PE and PE_IQ. Various fast and efficient multiplier algorithms are compared and combined to achieve a novel multiplier, named the Modified-Booth-Wallace-Multiplier, which is implemented in the CoSMAC and PE cells. The dedicated multiplier is designed mostly to target area and power optimisations without degrading performance. The conventional complex-multiplier-and-accumulators (CMACs) employed to perform the complex multiplications are replaced with these dedicated ASIC correlator cells, along with the optimised multipliers, to reduce the overall power and area requirements of a matrix correlator architecture. The proposed architecture lowers the number of ASIC processor cells required to calculate the overall baselines in an interferometer by eliminating redundant cells. Hence the new matrix-architecture minimisation is very effective, reducing the hardware complexity by nearly 50% without affecting the overall speed and performance of very large interferometers like the Square Kilometre Array (SKA).
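
    For orientation, the F and X stages that such correlator hardware accelerates can be stated in a few lines of NumPy: channelize each antenna stream with an FFT (F), then conjugate-multiply and accumulate per channel for every baseline (X). This is a plain software sketch of the generic FX algorithm, not the CoSMAC/PE cell design; the antenna count, frame length and random data are assumptions.

    ```python
    import numpy as np

    def fx_correlate(streams, nchan=64):
        """Sketch of an FX correlator: streams has shape (n_ant, n_samples)."""
        n_ant, n_samp = streams.shape
        # F stage: cut each stream into frames and channelize with an FFT.
        frames = streams[:, : n_samp - n_samp % nchan].reshape(n_ant, -1, nchan)
        spectra = np.fft.rfft(frames, axis=-1)
        # X stage: conjugate-multiply and accumulate, channel by channel,
        # for every baseline; the (i, i) entries are auto-correlations.
        vis = {}
        for i in range(n_ant):
            for j in range(i, n_ant):
                vis[(i, j)] = np.mean(spectra[i] * np.conj(spectra[j]), axis=0)
        return vis

    rng = np.random.default_rng(1)
    v = fx_correlate(rng.standard_normal((4, 4096)))
    print(len(v), "visibilities for 4 antennas")  # 4 autos + 6 cross = 10
    ```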

    Gigahertz Bandwidth and Nanosecond Timescales: New Frontiers in Radio Astronomy Through Peak Performance Signal Processing

    In the past decade, there has been a revolution in radio-astronomy signal processing. High-bandwidth receivers coupled with fast ADCs have enabled the collection of tremendous instantaneous bandwidth, but streaming computational resources are struggling to catch up and serve these new capabilities. As a consequence, there is a need for novel signal processing algorithms capable of maximizing these resources. This thesis responds to the demand by presenting FPGA implementations of a Polyphase Filter Bank which are an order of magnitude more efficient than previous algorithms while exhibiting similar noise performance. These algorithms are showcased alongside a broadband RF front-end in Starburst: a 5 GHz instantaneous-bandwidth two-element interferometer, the first broadband digital sideband-separating astronomical interferometer. Starburst technology has been applied to three instruments to date.

    Wielding tremendous computational power and precisely calibrated hardware, low-frequency radio telescope arrays have potential greatly exceeding their current applications. This thesis presents new modes for low-frequency radio telescopes, dramatically extending their original capabilities. A microsecond-scale time/frequency mode empowered the Owens Valley Long Wavelength Array to inspect not just the radio sky, by enabling the testing of novel imaging techniques and detecting overhead beacon satellites, but also the terrestrial neighborhood, allowing for the characterization and mitigation of nearby sources of radio frequency interference (RFI). This characterization led to insights prompting a nanosecond-scale observing mode to be developed, opening new avenues in high-energy astrophysics, specifically related to the radio-frequency detection of ultra-high-energy cosmic rays and neutrinos.

    Measurement of the flux spectrum, composition, and origin of the highest-energy cosmic-ray events is a lofty goal in high-energy astrophysics. One of the most powerful new windows has been the detection of associated extensive air showers at radio frequencies. However, all current ground-based systems must trigger off an expensive and insensitive external source such as particle detectors, making detection of the rare, high-energy events uneconomical. Attempts to make a direct detection in radio-only data have been unsuccessful despite numerous efforts. The problem is even more severe in the case of radio detection of ultra-high-energy neutrino events, which cannot rely on in-situ particle detectors as a triggering mechanism. This thesis combines the aforementioned nanosecond-scale observing mode with real-time, on-FPGA RFI mitigation and sophisticated offline post-processing. The resulting system has produced the first successful ground-based detection of cosmic rays using only radio instruments. Design and measurements of cosmic-ray detections are discussed, as well as recommendations for future cosmic-ray experiments. The presented future designs allow for another order-of-magnitude improvement in both sensitivity and output data rate, paving the way for the economical ground-based detection of the highest-energy neutrinos.
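
    For reference, the polyphase filter bank at the heart of the first part is a windowed, overlapped FFT: blocks of input are weighted by a prototype low-pass filter split into polyphase branches, the branches are summed, and an FFT yields the channels. A minimal NumPy/SciPy sketch follows; the channel count, tap count and prototype filter are assumptions, and none of the FPGA-specific optimizations are represented.

    ```python
    import numpy as np
    from scipy.signal import firwin

    def pfb_channelize(x, nchan=32, ntaps=4):
        """Critically sampled polyphase filter bank channelizer (sketch)."""
        # Prototype low-pass filter, reshaped into ntaps x nchan branches.
        h = firwin(nchan * ntaps, cutoff=1.0 / nchan).reshape(ntaps, nchan)
        nframes = x.size // nchan - ntaps + 1
        out = np.empty((nframes, nchan // 2 + 1), dtype=complex)
        for k in range(nframes):
            # Weight ntaps consecutive blocks, sum the branches, then FFT.
            block = x[k * nchan : (k + ntaps) * nchan].reshape(ntaps, nchan)
            out[k] = np.fft.rfft((block * h).sum(axis=0))
        return out

    rng = np.random.default_rng(2)
    print(pfb_channelize(rng.standard_normal(1 << 14)).shape)  # (509, 17)
    ```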

    A test platform for measuring the energy efficiency of AC induction motors under various loading conditions and control schemes

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 2012. "May 2012." Cataloged from PDF version of thesis. Includes bibliographical references (p. 63).

    A test platform was developed to measure and compare the energy efficiency of an AC induction motor under steady-state and cyclical loading conditions, both while operating in a constant-speed mode and while performing speed-to-speed transitions. The details of the construction are provided. The motor under test is fully characterized and modeled in order to establish theoretical bounds for maximum-efficiency operation. In addition, several custom motor controllers were created and the specifics of their implementation are given. Results from tests on both commercial and custom controllers show the test platform to be a valuable tool for characterizing the energy efficiency of the AC induction motor while subjected to various loading conditions under the control of the different motor controllers.
    by John A. Granata. M.Eng.
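
    The core bookkeeping such a platform automates is simple to state: efficiency is shaft output power divided by electrical input power. A hedged Python sketch of that calculation for a three-phase machine follows; the function name and the sample operating point are invented for illustration.

    ```python
    import math

    def motor_efficiency(torque_nm, speed_rpm, v_rms, i_rms, power_factor):
        """Efficiency = mechanical output power / electrical input power."""
        omega = speed_rpm * 2 * math.pi / 60                 # shaft speed, rad/s
        p_out = torque_nm * omega                            # shaft power, W
        p_in = math.sqrt(3) * v_rms * i_rms * power_factor   # 3-phase input, W
        return p_out / p_in

    # Example: 10 N*m at 1740 rpm, 230 V line-to-line, 6.2 A, pf 0.85
    print(f"{motor_efficiency(10, 1740, 230, 6.2, 0.85):.2%}")
    ```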

    Sampling from a system-theoretic viewpoint

    This paper studies a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms.

    The paper is split into three parts. In Part I we present the paradigm and revise the lifting technique, which is our main technical tool. In Part II optimal samplers and holds are designed for various analog signal reconstruction problems. In some cases one component is fixed while the remaining ones are designed; in other cases all three components are designed simultaneously. No causality requirements are imposed in Part II, which allows the use of frequency-domain arguments, in particular the lifted frequency response introduced in Part I. In Part III the main emphasis is placed on a systematic incorporation of causality constraints into the optimal design of reconstructors. We consider reconstruction problems in which the sampling (acquisition) device is given and the performance is measured by the $L^2$-norm of the reconstruction error. The problem is solved under the constraint that the optimal reconstructor is $l$-causal for a given $l \geq 0$, i.e., that its impulse response is zero on the time interval $(-\infty, -lh)$, where $h$ is the sampling period. We derive a closed-form state-space solution of the problem, which is based on the spectral factorization of a rational transfer function.
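
    To restate the Part III causality constraint and objective in display form (a transcription of the abstract's definitions; the operator form of the reconstruction error is a paraphrase):

    ```latex
    % A reconstructor R with impulse response r is l-causal when r
    % vanishes before -l h, i.e., R may look at most l sampling
    % periods into the future:
    \[
      r(t) = 0 \quad \text{for } t \in (-\infty,\, -l h), \qquad l \ge 0 .
    \]
    % For a given sampler S, the optimal l-causal reconstructor then
    % minimizes the L^2-norm of the reconstruction error:
    \[
      \min_{\mathcal{R}\ l\text{-causal}} \;
        \bigl\| f - \mathcal{R}\,\mathcal{S} f \bigr\|_{L^2} .
    \]
    ```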