Neuronal Cell Type Classification using Deep Learning
The brain is likely the most complex organ, given the variety of functions it
controls, the number of cells it comprises, and their corresponding diversity.
Studying and identifying neurons, the brain's primary building blocks, is
essential for understanding brain function in health and disease. Recent
developments in machine learning have provided advanced tools for classifying
neurons. However, these methods remain black boxes, offering little
explainability or reasoning. This paper aims to provide a robust and
explainable deep-learning framework to classify neurons based on their
electrophysiological activity. Our analysis is performed on data provided by
the Allen Cell Types database containing a survey of biological features
derived from single-cell recordings of mice and humans. First, we classify
mouse neuronal cell types to identify excitatory and inhibitory neurons.
Then, human neurons are categorized into their broad types using domain
adaptation from the mouse data. Lastly, neurons are classified into sub-types
based on transgenic mouse lines using deep neural networks in an explainable
fashion.
We show state-of-the-art results in dendrite-type classification of
excitatory vs. inhibitory neurons and in transgenic mouse line classification.
The model is also inherently interpretable, revealing correlations between
neuronal types and their electrophysiological properties.
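The classification task above can be illustrated with a toy sketch. The feature names, their value ranges, and the logistic-regression stand-in below are all illustrative assumptions; they are not the paper's deep network or the actual Allen Cell Types feature schema:

```python
import math
import random

random.seed(0)

# Hypothetical electrophysiological features (NOT the actual Allen Cell
# Types schema): inhibitory cells are simulated with narrow spikes and
# high firing rates, excitatory cells with the opposite.
def make_cell(inhibitory):
    spike_width = random.gauss(0.3 if inhibitory else 0.7, 0.05)   # ms
    firing_rate = random.gauss(40.0 if inhibitory else 10.0, 5.0)  # Hz
    return [spike_width, firing_rate / 50.0], 1.0 if inhibitory else 0.0

data = [make_cell(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression as a minimal stand-in for the paper's deep network.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

accuracy = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1.0)
    for x, y in data
) / len(data)
```

A linear model on well-separated toy features reaches near-perfect accuracy here; the paper's contribution is doing this on real recordings with a deep network whose feature attributions remain interpretable.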
TabADM: Unsupervised Tabular Anomaly Detection with Diffusion Models
Tables are an abundant form of data with use cases across all scientific
fields. Real-world datasets often contain anomalous samples that can negatively
affect downstream analysis. In this work, we only assume access to contaminated
data and present a diffusion-based probabilistic model effective for
unsupervised anomaly detection. Our model is trained to learn the density of
normal samples by utilizing a unique rejection scheme to attenuate the
influence of anomalies on the density estimation. At inference, we identify
anomalies as samples in low-density regions. We use real data to demonstrate
that our method improves detection capabilities over baselines. Furthermore,
our method is relatively robust to the dimensionality of the data and does not
require extensive hyperparameter tuning.
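The core principle, scoring samples by an estimated density and flagging the low-density ones as anomalies, can be sketched without a diffusion model. The Gaussian kernel density estimate below is a deliberately simple stand-in for TabADM's learned density, applied to synthetic 1-D data:

```python
import math
import random

random.seed(1)

# Synthetic "table column": mostly normal samples near 0, plus a few
# contaminating anomalies far from the bulk of the data.
normal = [random.gauss(0.0, 1.0) for _ in range(200)]
anomalies = [8.0, -9.0, 10.5]
data = normal + anomalies

def kde_density(x, samples, bandwidth=0.5):
    """Gaussian kernel density estimate at point x."""
    return sum(
        math.exp(-((x - s) / bandwidth) ** 2 / 2)
        for s in samples
    ) / (len(samples) * bandwidth * math.sqrt(2 * math.pi))

# Score every sample; low density => likely anomaly.
scores = [(kde_density(x, data), x) for x in data]
threshold = sorted(s for s, _ in scores)[2]  # third-lowest density
flagged = sorted(x for s, x in scores if s <= threshold)
```

Unlike this sketch, TabADM estimates the density with a diffusion model and uses a rejection scheme during training so the contaminating anomalies do not distort the density estimate itself.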
Anomaly Detection with Variance Stabilized Density Estimation
Density estimation based anomaly detection schemes typically model anomalies
as examples that reside in low-density regions. We propose a modified density
estimation problem and demonstrate its effectiveness for anomaly detection.
Specifically, we assume the density function of normal samples is uniform in
some compact domain. This assumption implies the density function is more
stable (with lower variance) around normal samples than anomalies. We first
corroborate this assumption empirically using a wide range of real-world data.
Then, we design a variance stabilized density estimation problem for maximizing
the likelihood of the observed samples while minimizing the variance of the
density around normal samples. We introduce an ensemble of autoregressive
models to learn the variance stabilized distribution. Finally, we perform an
extensive benchmark with 52 datasets demonstrating that our method leads to
state-of-the-art results while alleviating the need for data-specific
hyperparameter tuning.
Comment: 12 pages, 6 figures
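The objective, maximizing likelihood while minimizing the variance of the density around normal samples, can be sketched with a single Gaussian density in place of the paper's ensemble of autoregressive models. The data, penalty weight, and numerical-gradient optimizer below are illustrative assumptions:

```python
import math
import random

random.seed(2)
data = [random.gauss(5.0, 2.0) for _ in range(300)]  # "normal" samples

def log_density(x, mu, log_sigma):
    """Log-density of a Gaussian N(mu, sigma^2) at x."""
    sigma = math.exp(log_sigma)
    return (-0.5 * math.log(2 * math.pi) - log_sigma
            - (x - mu) ** 2 / (2 * sigma * sigma))

def loss(mu, log_sigma, lam=0.1):
    logs = [log_density(x, mu, log_sigma) for x in data]
    mean_log = sum(logs) / len(logs)
    nll = -mean_log                                          # likelihood term
    var = sum((l - mean_log) ** 2 for l in logs) / len(logs)  # stability term
    return nll + lam * var

# Numerical-gradient descent. The variance penalty pushes the fitted
# density to be flatter (closer to uniform) over the normal samples
# than a plain maximum-likelihood fit would be.
mu = sum(data) / len(data)
log_sigma, lr, eps = 0.0, 0.05, 1e-4
for _ in range(1000):
    g_mu = (loss(mu + eps, log_sigma) - loss(mu - eps, log_sigma)) / (2 * eps)
    g_ls = (loss(mu, log_sigma + eps) - loss(mu, log_sigma - eps)) / (2 * eps)
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

fitted_sigma = math.exp(log_sigma)
```

With the penalty active, the fitted scale comes out slightly larger than the data's true standard deviation: the density is deliberately flattened around the normal samples, which is the effect the paper exploits to separate them from anomalies.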