6,965 research outputs found
Physics-based passivity-preserving parameterized model order reduction for PEEC circuit analysis
The decrease of integrated circuit feature size and the increase of operating frequencies require 3-D electromagnetic methods, such as the partial element equivalent circuit (PEEC) method, for the analysis and design of high-speed circuits. Very large systems of equations are often produced by 3-D electromagnetic methods, and model order reduction (MOR) methods have proven to be very effective in combating such high complexity. During the circuit synthesis of large-scale digital or analog applications, it is important to predict the response of the circuit under study as a function of design parameters such as geometrical and substrate features. Traditional MOR techniques perform order reduction only with respect to frequency; therefore, a new electromagnetic model and the corresponding reduced model must be computed each time a design parameter is modified, reducing CPU efficiency. Parameterized model order reduction (PMOR) methods become necessary to reduce large systems of equations with respect to frequency and other design parameters of the circuit, such as geometrical layout or substrate characteristics. We propose a novel PMOR technique applicable to PEEC analysis which is based on a parameterization process of matrices generated by the PEEC method and the projection subspace generated by a passivity-preserving MOR method. The proposed PMOR technique guarantees overall stability and passivity of parameterized reduced order models over a user-defined range of design parameter values. Pertinent numerical examples validate the proposed PMOR approach.
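The projection step underlying MOR methods like the one above can be illustrated with a minimal sketch. This is not the paper's passivity-preserving PEEC construction; it is a generic Galerkin projection of a toy state-space system onto a small subspace, with the basis built from moment-style vectors (all matrices here are hypothetical stand-ins):

```python
import numpy as np

def reduce_model(A, B, C, r):
    """Generic Galerkin-projection MOR sketch: project a large LTI
    system (A, B, C) onto an r-dimensional orthonormal subspace V,
    giving a small reduced system (Ar, Br, Cr). A stand-in for the
    passivity-preserving projection the abstract refers to."""
    # Moment-style block [B, AB, A^2 B, ...]; orthonormalize via SVD
    K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
    V, _, _ = np.linalg.svd(K, full_matrices=False)
    V = V[:, :r]
    # Galerkin projection: approximate the state as x ~ V @ x_r
    Ar = V.T @ A @ V
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr

# Toy stable system of order 20, reduced to order 5
n = 20
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr = reduce_model(A, B, C, r=5)
print(Ar.shape)  # (5, 5)
```

A PMOR method extends this idea so that the projection basis remains valid across a range of design-parameter values, not just one frequency sweep.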
Parametric t-Distributed Stochastic Exemplar-centered Embedding
Parametric embedding methods such as parametric t-SNE (pt-SNE) have been
widely adopted for data visualization and out-of-sample data embedding without
further computationally expensive optimization or approximation. However, the
performance of pt-SNE is highly sensitive to the hyper-parameter batch size due
to conflicting optimization goals, and often produces dramatically different
embeddings with different choices of user-defined perplexities. To effectively
solve these issues, we present parametric t-distributed stochastic
exemplar-centered embedding methods. Our strategy learns embedding parameters
by comparing given data only with precomputed exemplars, resulting in a cost
function with linear computational and memory complexity, which is further
reduced by noise contrastive samples. Moreover, we propose a shallow embedding
network with high-order feature interactions for data visualization, which is
much easier to tune but produces comparable performance in contrast to a deep
neural network employed by pt-SNE. We empirically demonstrate, using several
benchmark datasets, that our proposed methods significantly outperform pt-SNE
in terms of robustness, visual effects, and quantitative evaluations.
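The linear-complexity cost the abstract describes can be sketched as follows. This is a simplified illustration, not the paper's exact objective: each of n points is compared only with m precomputed exemplars (so the cost is O(nm) rather than O(n^2)), using Student-t similarities as in t-SNE; the target matrix `P` here is a hypothetical placeholder:

```python
import numpy as np

def exemplar_tsne_loss(Y, E_y, P):
    """Toy exemplar-centered t-SNE loss (a sketch, not the paper's
    exact objective). Y: (n, d) embedded data; E_y: (m, d) embedded
    exemplars; P: (n, m) row-normalized target probabilities of each
    point selecting each exemplar."""
    # Student-t similarities between every point and every exemplar
    d2 = ((Y[:, None, :] - E_y[None, :, :]) ** 2).sum(-1)
    q = 1.0 / (1.0 + d2)
    Q = q / q.sum(axis=1, keepdims=True)
    # Sum of per-point KL divergences between target and embedding
    return float((P * np.log(P / Q)).sum())

n, m, d = 8, 3, 2
rng = np.random.default_rng(1)
Y = rng.standard_normal((n, d))
E_y = rng.standard_normal((m, d))
P = rng.random((n, m))
P /= P.sum(1, keepdims=True)
loss = exemplar_tsne_loss(Y, E_y, P)
print(loss >= 0.0)  # KL divergence is non-negative
```

Because the pairwise comparison is only against the m exemplars, both compute and memory scale linearly in n, which is the key efficiency point of the abstract.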
HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters
Projection algorithms such as t-SNE or UMAP are useful for the visualization
of high dimensional data, but depend on hyperparameters which must be tuned
carefully. Unfortunately, iteratively recomputing projections to find the
optimal hyperparameter value is computationally intensive and unintuitive due
to the stochastic nature of these methods. In this paper we propose HyperNP, a
scalable method that allows for real-time interactive hyperparameter
exploration of projection methods by training neural network approximations.
HyperNP can be trained on a fraction of the total data instances and
hyperparameter configurations and can compute projections for new data and
hyperparameters at interactive speeds. HyperNP is compact in size and fast to
compute, thus allowing it to be embedded in lightweight visualization systems
such as web browsers. We evaluate HyperNP across three
datasets in terms of accuracy and speed. The results suggest that HyperNP is
accurate, scalable, interactive, and appropriate for use in real-world
settings.
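The core idea, approximating a projection as a function of both the data and the hyperparameter, can be sketched with a tiny network. This is a hypothetical simplification of the architecture: a small MLP takes each data point concatenated with a hyperparameter value (e.g. t-SNE perplexity) and emits 2-D coordinates, so exploring a new hyperparameter setting needs only a forward pass rather than a full re-projection:

```python
import numpy as np

def hypernp_forward(X, hyper, W1, b1, W2, b2):
    """Sketch of a hyperparameter-conditioned projection network
    (hypothetical architecture, not the paper's exact model): map
    (data point, hyperparameter) pairs to 2-D embedding coordinates."""
    n = X.shape[0]
    # Append the hyperparameter value as one extra input feature
    inp = np.hstack([X, np.full((n, 1), hyper)])
    h = np.maximum(inp @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                  # 2-D output coordinates

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 10))            # 5 points, 10 features
W1 = rng.standard_normal((11, 16))          # 10 features + 1 hyperparam
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 2))
b2 = np.zeros(2)
Y = hypernp_forward(X, hyper=30.0, W1=W1, b1=b1, W2=W2, b2=b2)
print(Y.shape)  # (5, 2)
```

In the actual system the weights would be trained on projections computed for a sampled subset of data instances and hyperparameter configurations; inference then runs at interactive speeds for any value in the trained range.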
Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks
One of the challenges in modeling cognitive events from electroencephalogram
(EEG) data is finding representations that are invariant to inter- and
intra-subject differences, as well as to inherent noise associated with such
data. Herein, we propose a novel approach for learning such representations
from multi-channel EEG time-series, and demonstrate its advantages in the
context of a mental load classification task. First, we transform EEG activities
into a sequence of topology-preserving multi-spectral images, as opposed to
standard EEG analysis techniques that ignore such spatial information. Next, we
train a deep recurrent-convolutional network inspired by state-of-the-art video
classification to learn robust representations from the sequence of images. The
proposed approach is designed to preserve the spatial, spectral, and temporal
structure of EEG which leads to finding features that are less sensitive to
variations and distortions within each dimension. Empirical evaluation on the
cognitive load classification task demonstrated significant improvements in
classification accuracy over current state-of-the-art approaches in this field.
Comment: To be published as a conference paper at ICLR 201
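The first step of the pipeline, turning multi-channel EEG into topology-preserving multi-spectral images, can be sketched minimally. This simplified version places each electrode's band powers at its nearest image pixel (the paper uses proper 2-D interpolation over the scalp layout); electrode positions and band names here are hypothetical:

```python
import numpy as np

def eeg_to_image(positions, band_power, size=16):
    """Simplified topology-preserving image construction: scatter
    per-electrode spectral power into image channels at the pixel
    nearest each electrode (nearest-pixel placement instead of the
    interpolation used in the paper).
    positions: (c, 2) electrode coordinates in [0, 1]^2
    band_power: (c, 3) power in three bands -> three image channels"""
    img = np.zeros((size, size, 3))
    px = np.clip((positions * (size - 1)).round().astype(int), 0, size - 1)
    for (i, j), p in zip(px, band_power):
        img[j, i] += p  # accumulate each band's power at its pixel
    return img

rng = np.random.default_rng(3)
pos = rng.random((8, 2))     # 8 electrodes, projected scalp positions
power = rng.random((8, 3))   # e.g. theta/alpha/beta power (illustrative)
img = eeg_to_image(pos, power)
print(img.shape)  # (16, 16, 3)
```

A sequence of such images over time windows is what the recurrent-convolutional network in the abstract consumes, preserving spatial, spectral, and temporal structure simultaneously.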