Decomposition methods for machine learning with small, incomplete or noisy datasets
In many machine learning applications, measurements are incomplete or noisy, resulting in missing features. In other cases, and for different reasons, the datasets are simply small, so more data samples are required to derive useful supervised or unsupervised classification methods. Correct handling of incomplete, noisy, or small datasets in machine learning is a fundamental and classic challenge. In this article, we provide a unified review of recently proposed methods based on signal decomposition for missing-feature imputation (data completion), classification of noisy samples, and artificial generation of new data samples (data augmentation). We illustrate the application of these signal decomposition methods in diverse practical machine learning examples, including brain-computer interfaces, classification of epileptic intracranial electroencephalogram signals, face recognition/verification, and water-network data analysis. We show that a signal decomposition approach can provide valuable tools to improve machine learning performance with low-quality datasets.
Authors: Caiafa, César Federico (Instituto Argentino de Radioastronomía, CONICET, Argentina); Sole Casals, Jordi (Center for Advanced Intelligence, Japan); Marti Puig, Pere (University of Catalonia, Spain); Sun, Zhe (RIKEN, Japan); Tanaka, Toshihisa (Tokyo University of Agriculture and Technology, Japan)
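The review covers a family of decomposition-based methods; as a minimal sketch of the general idea only (not any specific method from the article), missing features can be imputed by iterating a rank-constrained SVD over the observed entries. The rank, iteration count, and toy data below are illustrative assumptions.

```python
import numpy as np

def lowrank_impute(X, rank=2, n_iters=50):
    """Fill missing entries (NaN) of X by iteratively fitting a rank-r SVD."""
    mask = ~np.isnan(X)                           # True where entries are observed
    X_filled = np.where(mask, X, np.nanmean(X))   # initialize missing entries with the global mean
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
        X_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # rank-r approximation
        X_filled = np.where(mask, X, X_lowrank)   # keep observed values, update missing ones
    return X_filled

# toy usage: a noisy rank-1 data matrix with ~20% of its features missing
rng = np.random.default_rng(0)
X_true = np.outer(rng.normal(size=20), rng.normal(size=5))
X_obs = X_true + 0.01 * rng.normal(size=X_true.shape)
X_obs[rng.random(X_obs.shape) < 0.2] = np.nan
X_hat = lowrank_impute(X_obs, rank=1)
print(np.max(np.abs(X_hat - X_true)))             # small residual error on this toy example
```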
Individual Differences in Sound-in-Noise Perception Are Related to the Strength of Short-Latency Neural Responses to Noise
Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40–66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect subsequent individual differences in the parsing of auditory scenes.
Strategies for Devising Automatic Signal Recognition Algorithms in a Shared Radio Environment
In an increasingly congested and complex radio environment, interference is to be expected, which poses problems for Automatic Signal Recognition (ASR) systems.
This thesis explores strategies for improving ASR performance in the presence of interference, breaking the overall research question down into a number of subquestions and exploring each in turn. A Phase-symmetric Cross Recurrence Plot is developed and used to show how a radio signal can be manipulated to separate information about the modulation from the information being carried. The Logarithmic Cyclic frequency Domain Profile is introduced to illustrate how a logarithmic representation can be used for analysing mixtures of signals with very different cyclic frequencies. After defining a canonical ASR system architecture, the concepts of an Ideal Feature and Interference Selectivity are introduced and applied to typical features used in ASR processing. Finally, it is shown how these algorithmic developments can be combined in a Bayesian chain implementation that can accommodate a wide variety of feature extraction algorithms.
It is concluded that future ASR systems will require features that can handle a wide range of signal types with much higher levels of interference selectivity if they are to achieve acceptable performance in shared spectrum bands. Intelligent segmentation is shown to be a requirement for future ASR systems unless features can be developed that have near-ideal performance.
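The Phase-symmetric Cross Recurrence Plot and the Logarithmic Cyclic frequency Domain Profile are the thesis's own constructions and are not reproduced here; for context only, a minimal sketch of a standard cross recurrence plot between two signals is shown below, with `dim`, `delay`, and `eps` as illustrative embedding parameters rather than values from the thesis.

```python
import numpy as np

def cross_recurrence_plot(x, y, dim=2, delay=1, eps=0.2):
    """Binary cross recurrence matrix between delay-embedded trajectories of x and y."""
    def embed(s):
        n = len(s) - (dim - 1) * delay
        return np.column_stack([s[i * delay : i * delay + n] for i in range(dim)])
    X, Y = embed(np.asarray(x, float)), embed(np.asarray(y, float))
    # pairwise Euclidean distances between embedded points of the two signals
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# toy usage: a carrier and a phase-modulated copy of it
t = np.linspace(0, 1, 500)
carrier = np.sin(2 * np.pi * 10 * t)
modulated = np.sin(2 * np.pi * 10 * t + 0.5 * np.sin(2 * np.pi * 2 * t))
crp = cross_recurrence_plot(carrier, modulated, dim=3, delay=5)
print(crp.shape, crp.mean())   # recurrence density of the plot
```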
Array Architectures and Physical Layer Design for Millimeter-Wave Communications Beyond 5G
Ever-increasing demands for mobile data rates have led to the exploration of millimeter-wave (mmW) frequencies for next-generation (5G) wireless networks. Communication at mmW frequencies presents two key challenges. First, high propagation loss requires base stations (BSs) and user equipment (UEs) to use large numbers of antennas and narrow beams to close the link with sufficient received signal power. Communicating with narrow beams, in turn, creates a new challenge in channel estimation and link establishment based on fine angular probing. Current mmW systems use analog phased arrays that can probe only one angle at a time, which results in high latency during link establishment and channel tracking. It is therefore desirable to design low-latency beam training by exploring both physical-layer designs and array architectures that could replace current 5G approaches and pave the way to communications in higher mmW bands and the sub-THz region, where larger antenna arrays and wider bandwidths can be exploited. To this end, we propose novel signal processing techniques that exploit unique properties of the mmW channel, and we demonstrate their advantages over conventional approaches theoretically, in simulation, and in experiments. Second, we explore different array architecture designs and analyze their trade-offs among spectral efficiency, power consumption, and area. For a comprehensive comparison, we developed a methodology for optimal design of system parameters for different array architecture candidates based on a spectral efficiency target, and we use these parameters to estimate array area and power consumption based on circuits reported in the literature. We show that hybrid analog/digital architectures have severe scalability concerns in radio-frequency signal distribution as array size and spatial multiplexing levels increase, while fully digital array architectures have the best performance and power/area trade-offs.
The developed approaches are based on cross-disciplinary research that combines innovations in model-based signal processing, machine learning, and radio hardware. This work is the first to apply compressive sensing (CS), a signal processing tool that exploits the sparsity of the mmW channel model, to accelerate beam training in mmW cellular systems. The algorithm is designed to address practical issues, including the requirement of cell discovery and synchronization, which involves estimating the angular channel together with the carrier frequency offset and timing offsets. We analyzed the algorithm's performance in a 5G-compliant simulation and showed that an order-of-magnitude saving in initial-access latency is achieved for the desired channel estimation accuracy. Moreover, we are the first to develop and implement a neural-network-assisted compressive beam alignment to deal with hardware impairments in mmW radios. Using a 60 GHz mmW testbed, we performed experiments showing that the neural network approach improves the alignment rate compared to CS alone. To further accelerate beam training, we proposed novel frequency-selective probing beams using a true-time-delay (TTD) analog array architecture. Our approach uses different subcarriers to scan different directions and achieves single-shot beam alignment, the fastest approach reported to date.
Our comprehensive analysis of different array architectures and exploration of emerging architectures enabled us to develop approaches for initial access and channel estimation in mmW systems that are an order of magnitude faster and more energy efficient.
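The abstract describes compressive beam training only at a high level; as an illustrative sketch (not the dissertation's algorithm), sparse recovery over a beamspace dictionary can estimate a dominant path direction from far fewer probes than an exhaustive beam sweep. The array size, number of probes, and the one-step matching-pursuit recovery below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_grid, n_probe = 32, 128, 10   # antennas, angle-grid size, compressive probes

# steering vectors on an angle grid (half-wavelength ULA) form the sparse dictionary
angles = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(angles))) / np.sqrt(n_ant)

# true single-path channel: one dominant direction with a complex gain
true_idx = 40
h = 1.3 * np.exp(1j * 0.7) * A[:, true_idx]

# compressive probing: each measurement uses a random phase-only combining vector
W = np.exp(1j * 2 * np.pi * rng.random((n_probe, n_ant))) / np.sqrt(n_ant)
y = W @ h + 0.01 * (rng.normal(size=n_probe) + 1j * rng.normal(size=n_probe))

# single-path recovery = one step of matching pursuit over the beamspace dictionary
corr = np.abs((W @ A).conj().T @ y)    # correlate measurements with each grid angle
est_idx = int(np.argmax(corr))
print(f"true angle index {true_idx}, estimated {est_idx}")
```

With 10 random probes instead of a 128-beam sweep, the dominant direction is still recovered on this toy single-path channel, which is the latency saving the compressive approach aims for.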
RAPID CLOCK RECOVERY ALGORITHMS FOR DIGITAL MAGNETIC RECORDING AND DATA COMMUNICATIONS
Black-hole binaries, gravitational waves, and numerical relativity
Understanding the predictions of general relativity for the dynamical interactions of two black holes has been a long-standing unsolved problem in theoretical physics. Black-hole mergers are monumental astrophysical events, releasing tremendous amounts of energy in the form of gravitational radiation, and are key sources for both ground- and space-based gravitational-wave detectors. The black-hole merger dynamics and the resulting gravitational waveforms can only be calculated through numerical simulations of Einstein's equations of general relativity. For many years, numerical relativists attempting to model these mergers encountered a host of problems that caused their codes to crash after just a fraction of a binary orbit could be simulated. Recently, however, a series of dramatic advances in numerical relativity has allowed stable, robust black-hole merger simulations. This article chronicles the remarkable progress in the rapidly maturing field of numerical relativity and the new understanding of black-hole binary dynamics that is emerging, and discusses important applications of these fundamental physics results to astrophysics, gravitational-wave astronomy, and other areas.