MulCNN: An efficient and accurate deep learning method based on gene embedding for cell type identification in single-cell RNA-seq data
Advancements in single-cell sequencing research have revolutionized our understanding of cellular heterogeneity and functional diversity through the analysis of single-cell transcriptomes and genomes. A crucial step in single-cell RNA sequencing (scRNA-seq) analysis is identifying cell types. However, scRNA-seq data are often high-dimensional and sparse, and manual cell type identification can be time-consuming, subjective, and lacking in reproducibility. Consequently, analyzing scRNA-seq data remains a computational challenge. With the increasing availability of well-annotated scRNA-seq datasets, advanced methods are emerging to aid in cell type identification by leveraging this information. Deep learning neural networks have great potential for analyzing single-cell data. This paper proposes MulCNN, a multi-level convolutional neural network that uses a unique cell type-specific gene expression feature extraction method, extracting critical features through multi-scale convolution while filtering noise. Extensive testing on datasets from various species, and comparisons with popular classification methods, show that MulCNN offers outstanding performance and a new, scalable direction for scRNA-seq analysis.
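The multi-scale convolution idea in this abstract can be illustrated with a minimal sketch: slide 1-D filters of several widths along a cell's gene-expression vector and pool each response. The kernel sizes, filter counts, and use of random (untrained) filters below are illustrative assumptions, not MulCNN's actual architecture.

```python
import numpy as np

def multi_scale_features(expr, kernel_sizes=(3, 5, 9), n_filters=4, seed=0):
    """Pool features from 1-D convolutions at several kernel widths
    (toy stand-in for learned multi-scale convolution)."""
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        for w in rng.standard_normal((n_filters, k)):  # random filters stand in for learned ones
            conv = np.convolve(expr, w, mode="valid")  # slide the filter along the gene axis
            feats.append(conv.max())                   # global max-pool to one scalar per filter
    return np.array(feats)

expr = np.random.default_rng(1).random(100)   # toy expression profile over 100 genes
feats = multi_scale_features(expr)            # 3 scales x 4 filters -> 12 pooled features
```

Each scale responds to expression patterns of a different extent, which is the intuition behind extracting "critical features through multi-scale convolution while filtering noise".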
Improved wolf swarm optimization with deep-learning-based movement analysis and self-regulated human activity recognition
A wide variety of applications, such as patient monitoring, rehabilitation sensing, sports, and elderly surveillance, require recognizing the physical activities of a person captured by sensors. The goal of human activity recognition is to identify human activities from a collection of observations, based on the behavior of subjects and the surrounding circumstances. Movement is examined in psychology, biomechanics, artificial intelligence, and neuroscience. In particular, the availability of pervasive devices and the low cost of recording movement, together with machine learning (ML) techniques for automatic and quantitative movement analysis, have driven the growth of systems for rehabilitation monitoring, user authentication, and medical diagnosis. The self-regulated detection of human activities from time-series smartphone sensor data is a growing area of study in intelligent and smart healthcare. Deep learning (DL) techniques have shown improvements over conventional ML methods in many fields, including human activity recognition (HAR). This paper presents an improved wolf swarm optimization with deep-learning-based movement analysis and self-regulated human activity recognition (IWSODL-MAHAR) technique, which aims to recognize various kinds of human activities. Since high dimensionality poses a major issue in HAR, the IWSO algorithm is applied for dimensionality reduction. In addition, the IWSODL-MAHAR technique uses a hybrid DL model for activity recognition. To further improve recognition performance, the Nadam optimizer is applied for hyperparameter tuning. The IWSODL-MAHAR approach is evaluated on benchmark activity recognition data, and the experimental results show that it outperforms recent models.
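The role of a swarm optimizer as a dimensionality-reduction step can be sketched generically: each agent carries a binary feature mask scored by a wrapper fitness, and agents drift toward the best mask found. This is a toy wrapper-selection loop under assumed settings, not the paper's IWSO algorithm.

```python
import numpy as np

def swarm_feature_selection(X, y, n_agents=8, n_iters=20, seed=0):
    """Toy swarm-style wrapper feature selection: binary masks scored
    by nearest-centroid accuracy minus a small size penalty."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return -1.0
        Xs = X[:, mask]
        c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
        return (pred == y).mean() - 0.01 * mask.mean()

    masks = rng.random((n_agents, n_feat)) < 0.5
    best_mask = masks[0].copy()
    best_fit = fitness(best_mask)
    for _ in range(n_iters):
        for i in range(n_agents):
            f = fitness(masks[i])
            if f > best_fit:
                best_fit, best_mask = f, masks[i].copy()
            flip = rng.random(n_feat) < 0.2          # crude "move toward the pack leader"
            masks[i] = np.where(flip, best_mask, masks[i])
    return best_mask, best_fit

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only the first two features are informative
mask, fit = swarm_feature_selection(X, y)
```

The selected mask would then feed a downstream classifier, mirroring how IWSO-reduced features feed the hybrid DL model.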
Time as a supervisor: temporal regularity and auditory object learning
Sensory systems appear to learn to transform incoming sensory information into perceptual representations, or “objects,” that can inform and guide behavior with minimal explicit supervision. Here, we propose that the auditory system can achieve this goal by using time as a supervisor, i.e., by learning features of a stimulus that are temporally regular. We show that this procedure generates a feature space sufficient to support fundamental computations of auditory perception. Specifically, we consider the problem of discriminating between instances of a prototypical class of natural auditory objects, i.e., rhesus macaque vocalizations. We test discrimination in two ethologically relevant tasks: discrimination in a cluttered acoustic background and generalization to novel exemplars. We show that an algorithm that learns these temporally regular features affords discrimination and generalization that is better than or equivalent to conventional feature-selection algorithms, i.e., principal component analysis and independent component analysis. Our findings suggest that the slow temporal features of auditory stimuli may be sufficient for parsing auditory scenes, and that the auditory brain could utilize these slowly changing temporal features.
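"Learning temporally regular features" is closely related to classic linear slow feature analysis: whiten the signal, then keep the projections whose temporal derivative has the least variance. The sketch below uses that standard formulation as a stand-in; it is not the paper's exact learning rule.

```python
import numpy as np

def slow_features(X, n_components=1):
    """Minimal linear slow feature analysis on X of shape (time, channels):
    whiten, then take the whitened directions with the slowest dynamics."""
    X = X - X.mean(0)
    vals, vecs = np.linalg.eigh(np.cov(X.T))
    Z = X @ (vecs / np.sqrt(vals))                      # whiten the channels
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return Z @ dvecs[:, :n_components]                  # slowest directions come first

t = np.linspace(0, 1, 500)
slow = np.sin(2 * np.pi * t)                            # slowly varying source
fast = np.sin(2 * np.pi * 40 * t)                       # rapidly varying source
X = np.stack([slow + 0.5 * fast, 0.3 * slow + fast], axis=1)
recovered = slow_features(X)[:, 0]
corr = abs(np.corrcoef(recovered, slow)[0, 1])          # should track the slow source
```

On this toy mixture the slowest component recovers the slowly varying source, illustrating why slow temporal features can serve as a feature space for discrimination.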
Colour technologies for content production and distribution of broadcast content
Accurate colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline that integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using Conditional GANs is proposed to improve content generalisation and the colourfulness of existing baselines. Moreover, a more conservative solution is considered by providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce temporal flickering and the propagation of errors when such methods are applied frame-by-frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focused on simplifying the proposed architecture to obtain a more compact and explainable model.
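The simplest ancestor of exemplar-guided colourisation is Reinhard-style statistics matching: impose the reference image's per-channel mean and standard deviation on the target. The sketch below shows that classic baseline only; it is far simpler than the learned exemplar-based methods developed in the thesis.

```python
import numpy as np

def match_colour_stats(target, reference):
    """Per-channel mean/std colour transfer for (H, W, 3) arrays
    with values in [0, 255] (classic baseline, not the thesis method)."""
    t = target.astype(float)
    r = reference.astype(float)
    out = (t - t.mean((0, 1))) / (t.std((0, 1)) + 1e-8)   # normalise target channels
    out = out * r.std((0, 1)) + r.mean((0, 1))            # impose reference statistics
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
target = rng.uniform(100, 150, (32, 32, 3))
reference = rng.uniform(80, 120, (32, 32, 3))
out = match_colour_stats(target, reference)               # colours now follow the reference
```

A learned exemplar-based model replaces this global statistic with spatially varying, semantically matched colour cues, which is what motivates the fast end-to-end architecture described above.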
Genomic prediction in plants: opportunities for ensemble machine learning based approaches [version 2; peer review: 1 approved, 2 approved with reservations]
Background: Many studies have demonstrated the utility of machine learning (ML) methods for genomic prediction (GP) of various plant traits, but a clear rationale for choosing ML over conventionally used, often simpler parametric methods is still lacking. The predictive performance of GP models may depend on a plethora of factors including sample size, number of markers, population structure and genetic architecture. Methods: Here, we investigate which problem and dataset characteristics are related to good performance of ML methods for genomic prediction. We compare the predictive performance of two frequently used ensemble ML methods (Random Forest and Extreme Gradient Boosting) with parametric methods including genomic best linear unbiased prediction (GBLUP), reproducing kernel Hilbert space regression (RKHS), BayesA and BayesB. To explore problem characteristics, we use simulated and real plant traits under different levels of genetic complexity determined by the number of Quantitative Trait Loci (QTLs), heritability (h2 and h2e), population structure and linkage disequilibrium between causal nucleotides and other SNPs. Results: Decision-tree-based ensemble ML methods are a better choice for nonlinear phenotypes, and are comparable to Bayesian methods for linear phenotypes in the case of large-effect Quantitative Trait Nucleotides (QTNs). Furthermore, we find that ML methods are susceptible to confounding due to population structure but less sensitive to low linkage disequilibrium than linear parametric methods. Conclusions: Overall, this provides insights into the role of ML in GP as well as guidelines for practitioners.
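GBLUP is mathematically equivalent to ridge regression on the centred marker matrix (RR-BLUP), which makes the parametric baseline easy to sketch on simulated genotypes. All simulation settings below (marker count, QTL count, heritability, shrinkage) are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_qtl = 200, 500, 10
X = rng.binomial(2, 0.3, (n, p)).astype(float)         # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
g = X @ beta                                           # true genetic values
y = g + rng.normal(0, g.std(), n)                      # phenotype with h2 around 0.5

# Ridge regression on centred markers (equivalent to GBLUP / RR-BLUP)
Xc = X - X.mean(0)
lam = float(p)                                         # illustrative shrinkage strength
beta_hat = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ (y - y.mean()))
acc = np.corrcoef(Xc @ beta_hat, g)[0, 1]              # accuracy vs true genetic value
```

Swapping the ridge solve for a tree ensemble on the same `X`, `y` is the comparison at the heart of the paper: trees can capture nonlinear QTN effects that this linear estimator cannot.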
Statistical phase estimation and error mitigation on a superconducting quantum processor
Quantum phase estimation (QPE) is a key quantum algorithm, which has been widely studied as a method to perform chemistry and solid-state calculations on future fault-tolerant quantum computers. Recently, several authors have proposed statistical alternatives to QPE that have benefits on early fault-tolerant devices, including shorter circuits and better suitability for error mitigation techniques. However, practical implementations of the algorithm on real quantum processors are lacking. In this paper we practically implement statistical phase estimation on Rigetti's superconducting processors. We specifically use the method of Lin and Tong [PRX Quantum 3, 010318 (2022)] with the improved Fourier approximation of Wan et al. [PRL 129, 030503 (2022)], and apply a variational compilation technique to reduce circuit depth. We then incorporate error mitigation strategies including zero-noise extrapolation and readout error mitigation with bit-flip averaging. We propose a simple method to estimate energies from the statistical phase estimation data, which is found to improve the accuracy of final energy estimates by one to two orders of magnitude with respect to prior theoretical bounds, reducing the cost of performing accurate phase estimation calculations. We apply these methods to chemistry problems with active spaces of up to 4 electrons in 4 orbitals, including the application of a quantum embedding method, and use them to correctly estimate energies within chemical precision. Our work demonstrates that statistical phase estimation has a natural resilience to noise, particularly after mitigating coherent errors, and can achieve far higher accuracy than suggested by previous analysis, demonstrating its potential as a valuable quantum algorithm for early fault-tolerant devices.
Comment: 24 pages, 13 figures
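The statistical idea can be simulated classically in a few lines: Hadamard-test circuits at depth k yield shot-noisy estimates of exp(i·k·θ), and a spectral scan recovers the eigenphase θ. This is a noiseless-device toy with an assumed matched-filter estimator, not the Lin-Tong protocol or the paper's error-mitigated hardware experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.234                        # unknown eigenphase (toy value)
ks = np.arange(1, 41)                # powers of U, i.e. circuit depths
shots = 500

# Hadamard-test outcome probabilities: (1 + cos(k*theta))/2 for the real
# part, and (1 + sin(k*theta))/2 with an extra phase gate for the imaginary part.
re = 2 * rng.binomial(shots, (1 + np.cos(ks * theta)) / 2) / shots - 1
im = 2 * rng.binomial(shots, (1 + np.sin(ks * theta)) / 2) / shots - 1
g = re + 1j * im                     # shot-noisy estimates of exp(i*k*theta)

# Matched-filter scan over candidate phases: peak marks the eigenphase
cand = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
score = np.abs(np.exp(-1j * np.outer(cand, ks)) @ g)
theta_hat = cand[np.argmax(score)]
```

Because the estimate averages information across all depths, individual shot noise washes out, which is the intuition behind the noise resilience reported in the paper.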
LMDA-Net: A lightweight multi-dimensional attention network for general EEG-based brain-computer interface paradigms and interpretability
EEG-based recognition of activities and states involves the use of prior neuroscience knowledge to generate quantitative EEG features, which may limit BCI performance. Although neural network-based methods can effectively extract features, they often encounter issues such as poor generalization across datasets, high prediction volatility, and low model interpretability. Hence, we propose a novel lightweight multi-dimensional attention network, called LMDA-Net. By incorporating two novel attention modules designed specifically for EEG signals, the channel attention module and the depth attention module, LMDA-Net can effectively integrate features from multiple dimensions, resulting in improved classification performance across various BCI tasks. LMDA-Net was evaluated on four high-impact public datasets, including motor imagery (MI) and P300-Speller paradigms, and was compared with other representative models. The experimental results demonstrate that LMDA-Net outperforms other representative methods in terms of classification accuracy and prediction volatility, achieving the highest accuracy on all datasets within 300 training epochs. Ablation experiments further confirm the effectiveness of the channel attention module and the depth attention module. To facilitate an in-depth understanding of the features extracted by LMDA-Net, we propose class-specific neural network feature interpretability algorithms suitable for event-related potentials (ERPs) and event-related desynchronization/synchronization (ERD/ERS). By mapping the output of a specific layer of LMDA-Net to the time or spatial domain through class activation maps, the resulting feature visualizations provide interpretable analysis and establish connections with EEG time-spatial analysis in neuroscience. In summary, LMDA-Net shows great potential as a general online decoding model for various EEG tasks.
Comment: 20 pages, 7 figures
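A channel attention module of the kind described above can be sketched in squeeze-and-excitation form: summarize each EEG channel, pass the summary through a small bottleneck, and use a sigmoid gate to reweight channels. The weights and shapes below are placeholders, not LMDA-Net's actual module.

```python
import numpy as np

def channel_attention(X, W1, W2):
    """Generic squeeze-and-excitation-style channel attention for an
    EEG epoch X of shape (channels, time); W1, W2 stand in for learned weights."""
    s = X.mean(axis=1)                     # squeeze: one summary statistic per channel
    h = np.maximum(0.0, W1 @ s)            # excitation bottleneck with ReLU
    a = 1.0 / (1.0 + np.exp(-(W2 @ h)))    # sigmoid gate in (0, 1) per channel
    return X * a[:, None]                  # reweight each channel's time series

rng = np.random.default_rng(0)
C, T, r = 22, 256, 8                       # e.g. 22 EEG channels, 256 samples, bottleneck 8
X = rng.standard_normal((C, T))
Y = channel_attention(X, rng.standard_normal((r, C)), rng.standard_normal((C, r)))
```

The gate `a` is exactly the kind of per-channel quantity that class activation maps can project back onto the scalp for the interpretability analysis the abstract describes.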
Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location in the presence of background noise and irrelevant visual objects. The ability to decode the attended spatial location would facilitate brain-computer interfaces (BCI) for complex scene analysis. Here, we tested two different neuroimaging technologies and investigated their capability to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. For functional near-infrared spectroscopy (fNIRS), we targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intraparietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT), all of which were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. We found that fNIRS provides robust decoding of attended spatial locations for most participants and correlates with behavioral performance. Moreover, we found that the FEF makes a large contribution to decoding performance. Surprisingly, performance was significantly above chance level 1 s after cue onset, well before the peak of the fNIRS response.
For electroencephalography (EEG), while there are several successful EEG-based algorithms, to date all of them have focused exclusively on the auditory modality, where eye-related artifacts are minimized or controlled. Successful integration into more ecologically typical usage requires careful handling of eye-related artifacts, which are inevitable. We showed that fast and reliable decoding can be done with or without an ocular-artifact removal algorithm. Our results show that EEG and fNIRS are promising platforms for compact, wearable technologies that could be applied to decode attended spatial location and reveal the contributions of specific brain regions during complex scene analysis.
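Decoding an attended location from trial-wise neural features reduces to a small classification problem, which a leave-one-out nearest-centroid decoder illustrates. The trial counts, feature dimensions, and effect size below are toy assumptions, not the study's data or decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_feat = 60, 12                  # hypothetical trials x channel features
y = np.repeat([0, 1], n_trials // 2)       # attended location: 0 = left, 1 = right
X = rng.standard_normal((n_trials, n_feat))
X[y == 1, :3] += 1.0                       # a few informative channels (toy effect size)

# Leave-one-out nearest-centroid decoding of the attended location
correct = 0
for i in range(n_trials):
    tr = np.arange(n_trials) != i          # hold out trial i
    c0 = X[tr & (y == 0)].mean(0)
    c1 = X[tr & (y == 1)].mean(0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += int(pred == y[i])
acc = correct / n_trials                   # compare against 0.5 chance level
```

Comparing `acc` against the 0.5 chance level, e.g. with a binomial test, is the standard way to claim "significantly above chance" decoding as in the abstract.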
Discovering the hidden structure of financial markets through bayesian modelling
Understanding what drives the price of a financial asset is a question that remains largely unanswered. In this work we go beyond classic one-step-ahead prediction and instead construct models that create new information on the behaviour of these time series. Our aim is to gain a better understanding of the hidden structures that drive the moves of each financial time series, and thus of the market as a whole.
We propose a tool to decompose multiple time series into economically-meaningful variables to explain the endogenous and exogenous factors driving their underlying variability. The methodology we introduce goes beyond the direct model forecast. Indeed, since our model continuously adapts its variables and coefficients, we can study the time series of coefficients and selected variables. We also present a model to construct the causal graph of relations between these time series and include them in the exogenous factors.
Hence, we obtain a model able to explain what is driving the move of each specific time series and of the market as a whole. In addition, the obtained graph of the time series provides new information on the underlying risk structure of this environment. With this deeper understanding of the hidden structure, we propose novel ways to detect and forecast risks in the market. We validate our results with inferences up to one month into the future on stocks, FX futures, and ETF futures, demonstrating superior performance in terms of accuracy on large moves, longer-term prediction, and consistency over time. We also go into more detail on the economic interpretation of the new variables and discuss the resulting graph structure of the market.
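A model that "continuously adapts its variables and coefficients" can be sketched with recursive least squares under a forgetting factor, which produces a time series of coefficients that can itself be studied, as the abstract suggests. This is a generic adaptive-regression stand-in, not the paper's Bayesian model.

```python
import numpy as np

def rls_path(X, y, lam=0.98, delta=10.0):
    """Recursive least squares with forgetting factor lam: coefficients
    adapt through time; returns the coefficient path of shape (n, p)."""
    n, p = X.shape
    P = delta * np.eye(p)
    w = np.zeros(p)
    path = np.empty((n, p))
    for t in range(n):
        x = X[t]
        k = P @ x / (lam + x @ P @ x)          # adaptation gain
        w = w + k * (y[t] - x @ w)             # correct toward the current error
        P = (P - np.outer(k, x @ P)) / lam     # discount old information
        path[t] = w
    return path

rng = np.random.default_rng(0)
n = 400
X = rng.standard_normal((n, 1))
beta_t = np.linspace(1.0, 3.0, n)              # the true coefficient drifts over time
y = beta_t * X[:, 0] + 0.1 * rng.standard_normal(n)
path = rls_path(X, y)                          # tracks the drifting coefficient
```

Inspecting `path` over time, rather than only the one-step forecast, is the kind of "study the time series of coefficients" analysis the abstract describes.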