Deep Learning in Single-Cell Analysis
Single-cell technologies are revolutionizing the entire field of biology. The
large volumes of data generated by single-cell technologies are
high-dimensional, sparse, heterogeneous, and have complicated dependency
structures, making analyses using conventional machine learning approaches
challenging and impractical. In tackling these challenges, deep learning often
demonstrates superior performance compared to traditional machine learning
methods. In this work, we present a comprehensive survey of deep learning in
single-cell analysis. We first introduce background on single-cell technologies
and their development, as well as fundamental concepts of deep learning
including the most popular deep architectures. We present an overview of the
single-cell analytic pipeline pursued in research applications while noting
divergences due to data sources or specific applications. We then review seven
popular tasks spanning different stages of the single-cell analysis
pipeline, including multimodal integration, imputation, clustering, spatial
domain identification, cell-type deconvolution, cell segmentation, and
cell-type annotation. Under each task, we describe the most recent developments
in classical and deep learning methods and discuss their advantages and
disadvantages. Deep learning tools and benchmark datasets are also summarized
for each task. Finally, we discuss the future directions and the most recent
challenges. This survey will serve as a reference for biologists and computer
scientists, encouraging collaborations.
Comment: 77 pages, 11 figures, 15 tables; keywords: deep learning, single-cell analysis
Semiparametric Regression During 2003–2007
Semiparametric regression is a fusion between parametric regression and nonparametric regression and the title of a book that we published on the topic in early 2003. We review developments in the field during the five-year period since the book was written. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement, and widespread application.
Data-Driven Image Restoration
Every day, many images are taken by digital cameras, and people
demand visually accurate and pleasing results. Noise and
blur degrade images captured by modern cameras, while high-level
vision tasks (such as segmentation, recognition, and tracking)
require high-quality images. Therefore, image restoration,
specifically image deblurring and image denoising, is a critical
preprocessing step.
A fundamental problem in image deblurring is to reliably recover
distinct spatial frequencies that have been suppressed by the
blur kernel. Existing image deblurring techniques often rely on
generic image priors that only help recover part of the frequency
spectrum, such as frequencies near the high end. Motivated by this,
we pose the following specific questions: (i) Does class-specific
information offer an advantage over existing generic priors for
image quality restoration? (ii) If a class-specific prior exists,
how should it be encoded into a deblurring framework to recover
attenuated image frequencies? Throughout this work, we devise a
class-specific prior based on the band-pass filter responses and
incorporate it into a deblurring strategy. Specifically, we show
that the subspace of band-pass filtered images and their
intensity distributions serve as useful priors for recovering
image frequencies.
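The band-pass intuition above can be illustrated with a toy example. The sketch below, in plain Python, builds a crude band-pass filter as the difference of two box blurs; this generic filter and its names are hypothetical stand-ins, not the thesis's actual filter bank. Subtracting a wide blur from a narrow one removes both the DC component and the finest detail, leaving mid frequencies.

```python
def box_blur(xs, radius):
    # Simple moving average with edge clamping.
    n = len(xs)
    out = []
    for i in range(n):
        window = [xs[max(0, min(n - 1, j))] for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def band_pass(xs, r_small, r_large):
    # Difference of two blurs: keeps mid frequencies, rejects DC and fine noise.
    lo = box_blur(xs, r_large)
    mid = box_blur(xs, r_small)
    return [m - l for m, l in zip(mid, lo)]

# On a constant (pure-DC) signal the band-pass response is identically zero.
flat = [3.0] * 50
assert all(abs(v) < 1e-9 for v in band_pass(flat, 1, 5))
```

The zero response on a constant signal confirms that the filter rejects the DC component, the defining property the band-pass prior relies on.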
Next, we present a novel image denoising algorithm that uses an
external, category-specific image database. In contrast to
existing noisy image restoration algorithms, our method selects
clean image "support patches" similar to the noisy patch from
an external database. We employ a content-adaptive distribution
model for each patch, deriving the parameters of the
distribution from the support patches. Our objective function
is composed of a Gaussian fidelity term that imposes
category-specific information and a low-rank term that encourages
similarity between the noisy and support patches in a robust
manner.
Finally, we propose to learn a fully-convolutional network model
that consists of a Chain of Identity Mapping Modules (CIMM) for
image denoising. The CIMM structure possesses two distinctive
features that are important for the noise removal task. Firstly,
each residual unit employs identity mappings as the skip
connections and receives pre-activated input to preserve the
gradient magnitude propagated in both the forward and backward
directions. Secondly, by utilizing dilated kernels for the
convolution layers in the residual branch, each neuron in the
last convolution layer of each module can observe the full
receptive field of the first layer.
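The receptive-field claim about dilated kernels can be checked with a short calculation: for stride-1 convolutions, each layer with kernel size k and dilation d enlarges the receptive field by (k-1)·d, so dilation widens context without adding depth. A minimal sketch (the helper name is hypothetical, not from the paper):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 conv layers:
    each layer adds (kernel_size - 1) * dilation pixels."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Six 3x3 layers without dilation: the receptive field grows by 2 per layer.
print(receptive_field(3, [1] * 6))             # 13
# The same depth with dilated kernels covers far more context.
print(receptive_field(3, [1, 2, 4, 8, 4, 2]))  # 43
```

This is why, in the abstract's architecture, neurons in a module's last convolution layer can observe the full receptive field of its first layer at modest depth.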
Automatic music transcription: challenges and future directions
Automatic music transcription is considered by many to be a key enabling technology in music signal processing. However, the performance of transcription systems is still significantly below that of a human expert, and accuracies reported in recent years seem to have reached a limit, although the field is still very active. In this paper we analyse the limitations of current methods and identify promising directions for future research. Current transcription methods use general-purpose models which are unable to capture the rich diversity found in music signals. One way to overcome the limited performance of transcription systems is to tailor algorithms to specific use-cases. Semi-automatic approaches are another way of achieving more reliable transcription. Also, the wealth of musical scores and corresponding audio data now available is a rich potential source of training data, via forced alignment of audio to scores, but large-scale utilisation of such data has yet to be attempted. Other promising approaches include the integration of information from multiple algorithms and different musical aspects.
Advances in independent component analysis with applications to data mining
This thesis considers the problem of finding latent structure in high-dimensional data. It is assumed that the observed data are generated by unknown latent variables and their interactions. The task is to find these latent variables and the way they interact, given the observed data only. It is assumed that the latent variables do not depend on each other but act independently.
A popular method for solving the above problem is independent component analysis (ICA). It is a statistical method for expressing a set of multidimensional observations as a combination of unknown latent variables that are statistically independent of each other. Starting from ICA, several methods of estimating the latent structure in different problem settings are derived and presented in this thesis. An ICA algorithm for analyzing complex valued signals is given; a way of using ICA in the context of regression is discussed; and an ICA-type algorithm is used for analyzing the topics in dynamically changing text data. In addition to ICA-type methods, two algorithms are given for estimating the latent structure in binary valued data. Experimental results are given on all of the presented methods.
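The core statistical idea behind ICA-type estimation can be demonstrated in a few lines: by the central limit theorem, a mixture of independent sources is closer to Gaussian than the sources themselves, so un-mixing directions can be found by maximizing non-Gaussianity, often measured by excess kurtosis. A minimal sketch in plain Python (illustrative only, not one of the thesis's algorithms):

```python
import random

def excess_kurtosis(xs):
    # Sample excess kurtosis: 0 for Gaussian, negative for sub-Gaussian data.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

random.seed(0)
n = 100_000
# Two independent uniform sources (strongly sub-Gaussian, excess kurtosis -1.2).
s1 = [random.uniform(-1, 1) for _ in range(n)]
s2 = [random.uniform(-1, 1) for _ in range(n)]
# Their mixture is closer to Gaussian: excess kurtosis moves toward 0.
mix = [a + b for a, b in zip(s1, s2)]

k_src = excess_kurtosis(s1)
k_mix = excess_kurtosis(mix)
print(round(k_src, 2), round(k_mix, 2))  # roughly -1.2 and -0.6
assert abs(k_mix) < abs(k_src)
```

ICA algorithms invert this observation: they search for projections of the observed data whose distribution is as far from Gaussian as possible, and those projections recover the independent sources.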
Another, partially overlapping problem considered in this thesis is dimensionality reduction. Empirical validation is given on a computationally simple method called random projection: it does not introduce severe distortions in the data. It is also proposed that random projection could be used as a preprocessing method prior to ICA, and experimental results are shown to support this claim.
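The random projection claim can be sketched concretely: multiplying by a suitably scaled Gaussian random matrix preserves pairwise distances with high probability (the Johnson-Lindenstrauss flavor of result). A minimal illustration, with hypothetical names and dimensions chosen only for the demo:

```python
import math
import random

random.seed(1)
d, k = 1000, 200  # original and reduced dimensionality

# Gaussian random projection matrix, scaled by 1/sqrt(k) so that
# squared distances are preserved in expectation.
R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]

def project(x):
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

x = [random.gauss(0, 1) for _ in range(d)]
y = [random.gauss(0, 1) for _ in range(d)]

# Ratio of distances after vs. before projection: near 1 means low distortion.
ratio = dist(project(x), project(y)) / dist(x, y)
print(round(ratio, 2))  # close to 1.0
```

The typical relative distortion scales like 1/sqrt(k), which is why a computationally trivial method can serve as a preprocessing step before ICA without severely distorting the data.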
This thesis also contains several literature surveys on various aspects of finding the latent structure in high-dimensional data.
Separation and Estimation of the Number of Audio Signal Sources with Time and Frequency Overlap
Everyday audio recordings involve mixture signals: music contains a mixture of instruments; in a meeting or conference, there is a mixture of human voices. For these mixtures, automatically separating the sources or estimating their number is a challenging task. A common assumption when processing mixtures in the time-frequency domain is that the sources do not fully overlap. In this work, however, we consider cases where the overlap is severe, for instance when instruments play the same note (unison) or when many people speak concurrently ("cocktail party"), highlighting the need for new representations and more powerful models.
To address the problems of source separation and count estimation, we use conventional signal processing techniques as well as deep neural networks (DNNs). We first address the source separation problem for unison instrument mixtures, studying the distinct spectro-temporal modulations caused by vibrato. To exploit these modulations, we developed a method based on time warping, informed by an estimate of the fundamental frequency. For cases where such estimates are not available, we present an unsupervised model, inspired by the way humans group time-varying sources (common fate). This contribution comes with a novel representation that improves separation for overlapped and modulated sources in unison mixtures, and also improves vocal and accompaniment separation when used as input for a DNN model.
Then, we focus on estimating the number of sources in a mixture, which is important for real-world scenarios. Our work on count estimation was motivated by a study of how humans address this task, which led us to conduct listening experiments confirming that humans can correctly estimate the number of sources only up to four. To answer the question of whether machines can perform similarly, we present a DNN architecture trained to estimate the number of concurrent speakers. Our results show improvements over other methods, and the model even outperformed humans on the same task.
In both the source separation and source count estimation tasks, the key contribution of this thesis is the concept of "modulation", which is important for computationally mimicking human performance. Our proposed Common Fate Transform is an adequate representation for disentangling overlapping signals for separation, and an inspection of our DNN count estimation model revealed that it finds modulation-like intermediate features.
Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization
As a complicated, ubiquitous phenomenon encountered in ultrasound imaging, speckle can be treated either as annoying noise that needs to be reduced or as a source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively recent statistical signal processing tool, to both the speckle texture analysis and despeckling problems of B-mode ultrasound images was investigated. It is believed that higher-order statistics may provide extra information about the speckle texture beyond that provided by first- and second-order statistics alone. However, the higher-order statistics of speckle texture are still not clearly understood and are very difficult to model analytically; dealing with them directly is computationally prohibitive. On the one hand, many conventional ultrasound speckle texture analysis algorithms use only first- or second-order statistics. On the other hand, many multichannel filtering approaches use pre-defined analytical filters which are not adaptive to the data. In this study, an ICA-based multichannel filtering texture analysis algorithm, which accounts for both higher-order statistics and data adaptation, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was conducted to make the speckle images wide-sense quasi-stationary and thus amenable to an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel was chosen as the classifier to achieve the best classification accuracy.
Several representative conventional methods, including both low- and high-order statistics based methods, and both filtering and non-filtering methods, were chosen for a comparison study. The numerical experiments show that the proposed ICA-based algorithm outperforms the comparison algorithms in many cases. Two-component texture segmentation experiments were conducted, and the proposed algorithm showed a strong capability to segment two visually very similar yet different texture regions with rather fuzzy boundaries and almost the same mean and variance. By simulating speckle whose first-order statistics gradually approach the Rayleigh model from different non-Rayleigh models, the experiments reveal, to some extent, how the behavior of higher-order statistics changes with the underlying properties of tissue. It is demonstrated that when the speckle approaches the Rayleigh model, both second- and higher-order statistics lose their texture differentiation capability. However, when the speckle tends toward non-Rayleigh models, methods based on higher-order statistics show a strong advantage over those based solely on first- or second-order statistics. The proposed algorithm may potentially find clinical application in the early detection of soft tissue disease, and may also help in better understanding the ultrasound speckle phenomenon from the perspective of higher-order statistics. For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to the ultrasound B-mode image despeckling problem by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to real white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS that was designed strictly for additive WGN. A discussion is given on how to obtain the noise-free training image samples in various ways.
The experimental results show that the proposed method outperforms several classical methods chosen for comparison, including first- or second-order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
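The shrinkage step at the heart of sparse code shrinkage can be sketched as a soft-threshold nonlinearity: coefficients in the sparsifying basis that are small in magnitude are mostly noise and are set to zero, while large ones are shrunk slightly. This is only the Laplacian-prior special case; the actual ICA-SCS nonlinearity depends on the fitted density model, and the threshold below is an arbitrary demo value.

```python
def soft_shrink(u, t):
    """Soft-threshold shrinkage: zero out coefficients with |u| <= t
    (mostly noise) and pull larger ones toward zero by t."""
    if u > t:
        return u - t
    if u < -t:
        return u + t
    return 0.0

# Sparse coefficients contaminated by small noise values.
coeffs = [5.0, -0.2, 0.1, -4.0, 0.05]
shrunk = [soft_shrink(u, 0.3) for u in coeffs]
# Large coefficients survive (reduced by 0.3); small ones become exactly 0.0.
print(shrunk)
```

Applied in a basis where the clean signal is sparse (such as one learned by ICA) and inverted back, this nonlinearity suppresses additive white Gaussian noise while preserving strong signal components, which is why the preprocessing step that Gaussianizes the speckle matters.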