29 research outputs found

    Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience.

    Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but they are not succinctly captured by traditional dimensionality reduction techniques. Here, we describe a software toolbox, called seqNMF, with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of these extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions and on several real neural and behavioral datasets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.
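
    Although the abstract does not spell out the algorithm, seqNMF builds on convolutive NMF, which models the data matrix as a sum of time-shifted nonnegative templates; a minimal numpy sketch of that underlying model with multiplicative updates (not the authors' released toolbox, which additionally penalizes redundant factors) might look like this:

        import numpy as np

        def shift(M, l):
            # Shift the columns of M right by l (left if l < 0), zero-padding.
            out = np.zeros_like(M)
            if l >= 0:
                out[:, l:] = M[:, :M.shape[1] - l]
            else:
                out[:, :l] = M[:, -l:]
            return out

        def conv_nmf(X, n_patterns, n_lags, n_iter=200, eps=1e-9, seed=0):
            # X (neurons x time) ~ sum_l W[l] @ shift(H, l): W stacks one
            # template slice per lag, H holds when each pattern is active.
            rng = np.random.default_rng(seed)
            N, T = X.shape
            W = rng.random((n_lags, N, n_patterns))
            H = rng.random((n_patterns, T))
            for _ in range(n_iter):
                Xhat = sum(W[l] @ shift(H, l) for l in range(n_lags)) + eps
                denom = sum(W[l].T @ shift(Xhat, -l) for l in range(n_lags)) + eps
                H *= sum(W[l].T @ shift(X, -l) for l in range(n_lags)) / denom
                Xhat = sum(W[l] @ shift(H, l) for l in range(n_lags)) + eps
                for l in range(n_lags):
                    W[l] *= (X @ shift(H, l).T) / (Xhat @ shift(H, l).T + eps)
            return W, H

    The significance testing and prevalence measures mentioned above sit on top of this core fit.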

    Audio computing in the wild: frameworks for big data and small computers

    This dissertation presents machine learning algorithms that are designed to process as much data as needed while spending the least possible amount of resources, such as time, energy, and memory. Examples of such applications include, but are not limited to: a large-scale multimedia information retrieval system in which both the queries and the items in the database are noisy signals; collaborative audio enhancement from hundreds of user-created clips of a music concert; an event detection system running on a small device that has to process various sensor signals in real time; a lightweight custom chipset for speech enhancement on hand-held devices; and an instant music analysis engine running in smartphone apps. In all of these applications, efficient machine learning algorithms must achieve not only good performance but also great resource efficiency.

    We start from efficient dictionary-based single-channel source separation algorithms. Source-specific dictionaries of this kind can be trained with matrix factorization or topic modeling, so that their elements form a representative set of spectra for the particular source. At test time, the system estimates the contribution of the participating dictionary items to an unknown mixture spectrum. In this way we can estimate the activation of each source separately, and then recover the source of interest from that source's own reconstruction. This procedure raises several efficiency issues. First, searching for the optimal dictionary size is time-consuming. Although for some very common types of sources, e.g. English speech, we know the optimal rank of the model by trial and error, it is hard to know in advance the optimal number of dictionary elements for unknown sources, which are usually modeled at test time in semi-supervised separation scenarios. Furthermore, for non-stationary unknown sources, we had better maintain a dictionary that adapts its size and contents as the nature of the source changes. In this online semi-supervised separation scenario, a mechanism that can efficiently learn the optimal rank is helpful. To this end, a deflation method is proposed for modeling the unknown source with a nonnegative dictionary of optimal size. Since the modeling has to be done at test time, the deflation method, which incrementally adds new dictionary items, is more efficient than the corresponding naïve approach of simply trying a set of differently sized models. Another efficiency issue arises when we use a large dictionary for better separation. It is known that taking the manifold of the training data into account can enhance separation performance, because the usual manifold-ignorant convex combination models, such as those from low-rank matrix decomposition or topic modeling, tend to produce ambiguous regions in the source-specific subspace spanned by the dictionary items, regions in which no original data samples can actually reside. Although source separation techniques that respect the data manifold can increase performance, they call for more memory and computational resources, because the models require larger dictionaries and involve sparse coding at test time.
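
    To make the dictionary-based recipe above concrete, here is a minimal sketch on magnitude spectrograms, assuming hypothetical pre-trained dictionaries W_speech and W_noise whose columns are spectra learned by NMF or topic modeling:

        import numpy as np

        def fit_activations(V, W, n_iter=200, eps=1e-9, seed=0):
            # Multiplicative updates for V ~ W @ H with the dictionary W fixed.
            rng = np.random.default_rng(seed)
            H = rng.random((W.shape[1], V.shape[1]))
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
            return H

        def separate(V_mix, W_speech, W_noise, eps=1e-9):
            # Explain the mixture with both dictionaries at once, then keep
            # the share reconstructed by the dictionary of interest.
            W = np.hstack([W_speech, W_noise])
            H = fit_activations(V_mix, W)
            k = W_speech.shape[1]
            return W_speech @ H[:k] / (W @ H + eps) * V_mix  # Wiener-style mask

    The deflation method described above would instead grow the unknown-source dictionary one item at a time at test time, rather than fixing its rank in advance.
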
    The resource demands of manifold-respecting separation led to the development of hashing-based encodings of audio spectra, so that computationally heavy routines, such as the nearest neighbor searches needed for sparse coding, can be performed in a cheaper bit-wise fashion. Matching audio signals can be challenging as well, especially if the signals are noisy and the matching task involves a large number of signals. In an information retrieval application, for example, a larger database leads to a longer response time. Moreover, if the signals are defective, we either have to perform enhancement or separation before matching, or we need a matching mechanism that is robust to all those kinds of artifacts. Likewise, the noisy nature of the signals adds complexity to the system. This dissertation therefore also investigates compact integer (and eventually binary) representations for such matching systems. One possible compact representation is a hashing-based matching method, in which we employ a particular kind of hash function that preserves the similarity among the original signals in the hash code domain. We show that a variant of Winner Take All hashing provides noise-robust binary features compared under Hamming distance, and that matching with these hash codes works well for keyword spotting tasks. Motivated by the fact that landmark hashes (e.g. local maxima from non-maximum suppression on the magnitudes of a mel-scaled spectrogram) can also represent a time-frequency signal robustly and efficiently, a matrix decomposition algorithm is proposed that takes such irregular sparse matrices as input. Based on the assumption that the number of landmarks is much smaller than the number of all time-frequency coefficients, the matching algorithm can be considered efficient if it operates entirely on the landmark representation. In contrast to the usual landmark matching schemes, where matching is defined rigidly, we view audio matching as soft matching, in which we look for a constellation of landmarks similar to the query. To perform this soft matching, the landmark positions are smoothed by fixed-width Gaussian caps, so that the matching job reduces to calculating the amount of overlap between those Gaussians. The Gaussian-based density approximation is also useful when we perform decomposition on the landmark representation, because otherwise the landmarks are usually too sparse for an ordinary matrix factorization algorithm, which is designed for a dense input matrix. We extend this concept to a matrix deconvolution problem as well, in which the landmark representation of a source is modeled as a two-dimensional convolution between a source pattern and its corresponding sparse activations. If there is more than one source, as in a noisy signal, the problem becomes factor deconvolution, where the mixture is the combination of all the source-specific convolutions.
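
    The Winner Take All idea mentioned above can be sketched in a few lines (hypothetical parameters; the dissertation's exact variant may differ): each hash symbol records which of k randomly chosen coordinates is largest, so the code depends only on rank order and is robust to monotonic distortions of magnitude:

        import numpy as np

        def wta_hash(x, perms, k):
            # One symbol per permutation: the argmax position among the
            # first k permuted coordinates (rank-order information only).
            return np.array([int(np.argmax(x[p[:k]])) for p in perms])

        def hamming(c1, c2):
            # Code distance: the number of disagreeing symbols.
            return int(np.sum(c1 != c2))

        rng = np.random.default_rng(0)
        perms = [rng.permutation(513) for _ in range(64)]  # 513-bin spectra
        clean = rng.random(513)
        noisy = clean + 0.1 * rng.random(513)
        print(hamming(wta_hash(clean, perms, 4), wta_hash(noisy, perms, 4)))
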
    The dissertation also covers Collaborative Audio Enhancement (CAE) algorithms, which aim to recover the dominant source in a sound scene (e.g. the music at a concert rather than the noise from the crowd) from multiple low-quality recordings (e.g. YouTube video clips uploaded by the audience). CAE can be seen as crowdsourcing a recording job; it requires a substantial amount of denoising effort afterward, because the user-created recordings may have been contaminated with various artifacts. Because the recordings come from unsynchronized, heterogeneous sensors, CAE can also be viewed as a form of big ad-hoc sensor array processing. In CAE, each recording is assumed to be uniquely corrupted by the specific frequency response of its microphone, aggressive audio coding, interference, band-pass filtering, clipping, etc. To consolidate all these recordings and produce an enhanced audio signal, Probabilistic Latent Component Sharing (PLCS) is proposed as a method of simultaneous probabilistic topic modeling on the synchronized input signals. In PLCS, some of the parameters are fixed to be the same during and after the learning process so as to capture the common audio content, while the remaining parameters absorb the unwanted recording-specific interference and artifacts. PLCS can be sped up by incorporating a hashing-based nearest neighbor search, so that at every EM iteration PLCS is applied only to the small number of recordings closest to the current source estimate. Experiments on a small simulated CAE setup show that the proposed PLCS can improve the sound quality of variously contaminated recordings, and the nearest neighbor search provides an appreciable speed-up in larger-scale experiments (up to 1,000 recordings). Finally, to describe an extremely optimized deep learning deployment system, Bitwise Neural Networks (BNN) are discussed. In the proposed BNN, all the input, hidden, and output nodes are binary (+1 and -1), and so are all the weights and biases; consequently, the test-time operations on them are defined with Boolean algebra, too. BNNs are spatially and computationally efficient to implement, since (a) a real-valued sample or parameter is represented by a single bit, and (b) multiplication and addition correspond to bitwise XNOR and bit-counting, respectively. Therefore, BNNs can be used to implement a deep learning system in a resource-constrained environment, deploying deep learning on small devices without exhausting power, memory, or CPU clocks. The training procedure for BNNs is a straightforward extension of backpropagation, characterized by a quantization noise injection scheme and an initialization strategy that learns a weight-compressed real-valued network solely for initialization. Preliminary results on the MNIST dataset and on speech denoising demonstrate that this extension of backpropagation can successfully train BNNs whose performance is comparable to that of their real-valued counterparts while requiring vastly fewer computational resources.
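
    The arithmetic that makes BNNs cheap can be illustrated directly: with ±1 values packed as bits (bit 1 for +1), the inner product of two length-n vectors reduces to an XNOR followed by a popcount, so no multipliers are needed. A minimal sketch (not the dissertation's training code):

        import numpy as np

        def pack(v):
            # Pack a +/-1 vector into an integer bit mask (bit=1 for +1).
            return int(''.join('1' if x > 0 else '0' for x in v), 2)

        def binary_dot(w_bits, x_bits, n):
            # dot(w, x) = (#agreements) - (#disagreements)
            #           = 2 * popcount(XNOR(w, x)) - n
            agree = ~(w_bits ^ x_bits) & ((1 << n) - 1)
            return 2 * bin(agree).count('1') - n

        rng = np.random.default_rng(0)
        w = np.where(rng.standard_normal(16) > 0, 1, -1)
        x = np.where(rng.standard_normal(16) > 0, 1, -1)
        assert binary_dot(pack(w), pack(x), 16) == int(w @ x)
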

    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.

    Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
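
    To anchor the Canonical Polyadic (CP) model concretely, here is a standard alternating least squares sketch for a third-order tensor (textbook CP-ALS, not code from the paper):

        import numpy as np

        def unfold(X, mode):
            # Mode-n unfolding in the Kolda-Bader convention.
            return np.reshape(np.moveaxis(X, mode, 0),
                              (X.shape[mode], -1), order='F')

        def khatri_rao(A, B):
            # Column-wise Khatri-Rao product.
            r = A.shape[1]
            return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

        def cp_als(X, rank, n_iter=100, seed=0):
            # Fit X ~ sum_r a_r (outer) b_r (outer) c_r by cycling least
            # squares updates over the three factor matrices.
            rng = np.random.default_rng(seed)
            factors = [rng.standard_normal((s, rank)) for s in X.shape]
            for _ in range(n_iter):
                for n in range(3):
                    others = [factors[m] for m in range(3) if m != n]
                    kr = khatri_rao(others[1], others[0])
                    gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
                    factors[n] = unfold(X, n) @ kr @ np.linalg.pinv(gram)
            return factors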

    Advanced tensor based signal processing techniques for wireless communication systems and biomedical signal processing

    Many observed signals in signal processing applications, including wireless communications, biomedical signal processing, image processing, and machine learning, are multi-dimensional. Tensors preserve the multi-dimensional structure and provide a natural representation of these signals/data. Moreover, tensors often provide improved identifiability. Therefore, we benefit from using tensor algebra in the above-mentioned applications and many more. In this thesis, we present the benefits of utilizing tensor algebra in two signal processing areas: signal processing for MIMO (Multiple-Input Multiple-Output) wireless communication systems and biomedical signal processing. Moreover, we contribute to the theoretical aspects of tensor algebra by deriving new properties and ways of computing tensor decompositions. Often, we only have an element-wise or a slice-wise description of the signal model. Such a representation does not reveal the explicit tensor structure, so the derivation of all tensor unfoldings is not always obvious, and exploiting the multi-dimensional structure of these models is not always straightforward. We propose an alternative representation of element-wise and slice-wise multiplications based on the generalized tensor contraction operator. Later in this thesis, we exploit this novel representation and the properties of the contraction operator to derive the final tensor models. There exist a number of different tensor decompositions that describe different signal models, such as the HOSVD (Higher Order Singular Value Decomposition), the CP/PARAFAC (Canonical Polyadic / PARallel FACtors) decomposition, the BTD (Block Term Decomposition), the PARATUCK2 (PARAfac and TUCker2) decomposition, and the PARAFAC2 (PARAllel FACtors2) decomposition. Among these, the CP decomposition is the most widespread and widely used. Therefore, the development of algorithms for the efficient computation of the CP decomposition is important for many applications. The SECSI (SEmi-algebraic framework for approximate CP decomposition via SImultaneous matrix diagonalization) framework is an efficient and robust tool for the calculation of the approximate low-rank CP decomposition via simultaneous matrix diagonalizations. In this thesis, we present five extensions of the SECSI framework that reduce the computational complexity of the original framework and/or introduce constraints on the factor matrices. Moreover, the PARAFAC2 decomposition and the PARATUCK2 decomposition are usually described using a slice-wise notation that can be expressed in terms of the generalized tensor contraction proposed in this thesis. We exploit this novel representation to derive explicit tensor models for the PARAFAC2 and PARATUCK2 decompositions. Furthermore, we use the PARAFAC2 model to derive an ALS (Alternating Least-Squares) algorithm for the computation of the PARAFAC2 decomposition. Moreover, we exploit the novel contraction properties for element-wise and slice-wise multiplications to model MIMO multi-carrier wireless communication systems. We show that this very general model can be used to derive the tensor model of the received signal for MIMO-OFDM (Multiple-Input Multiple-Output - Orthogonal Frequency Division Multiplexing), Khatri-Rao coded MIMO-OFDM, and randomly coded MIMO-OFDM systems.
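
    The flavor of the generalized contraction representation can be shown with a generic numpy illustration (not the thesis's own notation): a slice-wise matrix product and an element-wise rank-one construction are each a single contraction once the participating modes are written out:

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((4, 5, 7))
        B = rng.random((5, 6, 7))

        # Slice-wise multiplication C[:, :, k] = A[:, :, k] @ B[:, :, k],
        # expressed as one contraction over the shared mode j.
        C = np.einsum('ijk,jlk->ilk', A, B)
        assert np.allclose(C[:, :, 0], A[:, :, 0] @ B[:, :, 0])

        # Element-wise construction D[i, j, k] = F[i, k] * G[j, k], the
        # slice-wise rank-one structure behind Khatri-Rao-type models.
        F, G = rng.random((4, 7)), rng.random((6, 7))
        D = np.einsum('ik,jk->ijk', F, G)
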
    We propose the transmission techniques Khatri-Rao coding and random coding in order to impose an additional tensor structure on the transmit signal tensor, which otherwise does not have a particular structure. Moreover, we show that this model can be extended to other multi-carrier techniques such as GFDM (Generalized Frequency Division Multiplexing). Utilizing these models at the receiver side, we design several types of receivers for these systems that outperform the traditional matrix-based solutions in terms of symbol error rate. In the last part of this thesis, we show the benefits of using tensor algebra in biomedical signal processing by jointly decomposing EEG (ElectroEncephaloGraphy) and MEG (MagnetoEncephaloGraphy) signals. EEG and MEG signals are usually acquired simultaneously, and they capture aspects of the same brain activity. Therefore, EEG and MEG signals can be decomposed using coupled tensor decompositions such as the coupled CP decomposition. We exploit the proposed coupled SECSI framework (one of the proposed extensions of the SECSI framework) for the computation of the coupled CP decomposition, first to validate and analyze the photic driving effect. Moreover, we validate the effects of skull defects on the measured EEG and MEG signals by means of a joint EEG-MEG decomposition using the coupled SECSI framework. Both applications show that we benefit from coupled tensor decompositions, and that the coupled SECSI framework is a very practical tool for the analysis of biomedical data.

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
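
    For orientation, the tensor train format factors a d-way array into a chain of three-way cores; a minimal TT-SVD sketch with a fixed maximum rank (the standard algorithm, not code from the monograph):

        import numpy as np

        def tt_svd(X, max_rank):
            # Sweep left to right, splitting one mode off at a time with a
            # truncated SVD; core k has shape (r_prev, dims[k], r_next).
            dims, cores, r_prev = X.shape, [], 1
            M = X.reshape(dims[0], -1)
            for k in range(len(dims) - 1):
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                r = min(max_rank, len(s))
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))
                M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(M.reshape(r_prev, dims[-1], 1))
            return cores

        # A 4x5x6 array of TT-rank 2 is stored exactly in three small cores.
        X = np.einsum('aib,bjc,ckd->ijk',
                      np.random.rand(1, 4, 2), np.random.rand(2, 5, 2),
                      np.random.rand(2, 6, 1))
        G1, G2, G3 = tt_svd(X, max_rank=2)
        assert np.allclose(X, np.einsum('aib,bjc,ckd->ijk', G1, G2, G3))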

    Advances in independent component analysis and nonnegative matrix factorization

    A fundamental problem in machine learning research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis (PCA), factor analysis, and projection pursuit. In this thesis, we consider two popular and widely used techniques: independent component analysis (ICA) and nonnegative matrix factorization (NMF). ICA is a statistical method in which the goal is to find a linear representation of nongaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. Starting from ICA, several methods of estimating the latent structure in different problem settings are derived and presented in this thesis. FastICA, one of the most efficient and popular ICA algorithms, is reviewed and discussed, and its local and global convergence and statistical behavior are studied further. A nonnegative FastICA algorithm is also given in this thesis. Nonnegative matrix factorization is a recently developed technique for finding parts-based, linear representations of nonnegative data. It is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. The nonnegativity constraints make the representation purely additive (allowing no subtractions), in contrast to many other linear representations such as principal component analysis and independent component analysis. A literature survey of nonnegative matrix factorization is given in this thesis, and a novel method called Projective Nonnegative Matrix Factorization (P-NMF) and its applications are presented.
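
    For reference, the core of FastICA is a fixed-point iteration combined with Gram-Schmidt deflation; a minimal sketch (tanh nonlinearity, inputs assumed centered and whitened, not the thesis's exact algorithms):

        import numpy as np

        def fastica(X, n_components, n_iter=200, tol=1e-6, seed=0):
            # X: (n_features, n_samples), centered and whitened.
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            W = np.zeros((n_components, n))
            for i in range(n_components):
                w = rng.standard_normal(n)
                w /= np.linalg.norm(w)
                for _ in range(n_iter):
                    wx = w @ X
                    g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
                    # Fixed point update: E[x g(w'x)] - E[g'(w'x)] w
                    w_new = (X * g).mean(axis=1) - g_prime.mean() * w
                    # Deflation: stay orthogonal to components already found
                    w_new -= W[:i].T @ (W[:i] @ w_new)
                    w_new /= np.linalg.norm(w_new)
                    converged = abs(abs(w_new @ w) - 1.0) < tol
                    w = w_new
                    if converged:
                        break
                W[i] = w
            return W  # unmixing matrix; estimated sources are W @ X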

    Independent Component Analysis Enhancements for Source Separation in Immersive Audio Environments

    In immersive audio environments with distributed microphones, Independent Component Analysis (ICA) can be applied to uncover signals from a mixture of other signals and noise, as in a cocktail party recording. ICA algorithms have been developed for both instantaneous and convolutive source mixtures. While ICA for instantaneous mixtures works when no delays exist between the signals in each mixture, distributed microphone recordings typically exhibit various delays of the signals across the recorded channels. Convolutive ICA can account for delays; however, it requires many parameters to be set and often has stability issues. This thesis introduces Channel Aligned FastICA (CAICA), which requires knowledge of the distance from the source to each microphone, but no knowledge of the noise sources. Furthermore, CAICA is combined with Time Frequency Masking (TFM), yielding even better extraction of the signal of interest (SOI), even in low-SNR environments. Simulations were conducted as ranking experiments that tested the performance of three algorithms: Weighted Beamforming (WB), CAICA, and CAICA with TFM, with the Closest Microphone (CM) recording used as a reference for all three. Statistical analyses of the results demonstrated superior performance for CAICA with TFM. The algorithms were also applied to experimental recordings to support the conclusions of the simulations. These techniques can be deployed on mobile platforms, used in surveillance for capturing human speech, and potentially adapted to biomedical fields.
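
    The alignment step can be sketched as follows (a hypothetical helper under the stated assumption that source-to-microphone distances are known; the remaining CAICA processing is the thesis's contribution): compensate each channel by its propagation delay so that an instantaneous ICA such as FastICA becomes applicable:

        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s, room temperature

        def align_channels(recordings, distances_m, fs):
            # Remove each channel's relative propagation delay (in samples)
            # so the source of interest is time-aligned across channels.
            delays = np.round(np.asarray(distances_m)
                              / SPEED_OF_SOUND * fs).astype(int)
            delays -= delays.min()
            n = min(len(r) - d for r, d in zip(recordings, delays))
            return np.stack([r[d:d + n] for r, d in zip(recordings, delays)])

        # X = align_channels(mic_signals, distances_m, fs=16000), followed by
        # an instantaneous ICA (e.g. FastICA) on the aligned channels.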

    Example-based audio editing

    Traditionally, audio recordings are edited in digital audio workstations (DAWs), which give users access to different tools and parameters through a graphical user interface (GUI) without requiring prior knowledge of coding or signal processing. The complexity of working with DAWs and the undeniable need for strong listening skills have made audio editing unpopular among novice users and time-consuming for professionals. We propose an intelligent, example-based audio editor (EBAE) that automates major audio editing routines with the use of an example sound and efficiently provides users with high-quality results. EBAE first extracts meaningful information from an example sound that already contains the desired effects and then applies those effects to a desired recording by employing signal processing and machine learning techniques.
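
    The abstract does not detail EBAE's internals, but the flavor of example-based editing can be sketched as transferring one measurable property of the example, here its long-term average spectrum, onto the target as a smooth EQ curve (a purely hypothetical illustration, not the proposed system):

        import numpy as np
        from scipy.signal import stft, istft

        def match_spectrum(example, target, fs, n_fft=2048, eps=1e-9):
            # Estimate both long-term average magnitude spectra, then apply
            # their ratio to the target as a per-band gain.
            _, _, E = stft(example, fs=fs, nperseg=n_fft)
            _, _, T = stft(target, fs=fs, nperseg=n_fft)
            eq = (np.abs(E).mean(axis=1) + eps) / (np.abs(T).mean(axis=1) + eps)
            _, y = istft(T * eq[:, None], fs=fs, nperseg=n_fft)
            return y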