Tensor Regression Networks
Convolutional neural networks typically consist of many convolutional layers
followed by one or more fully connected layers. While convolutional layers map
between high-order activation tensors, the fully connected layers operate on
flattened activation vectors. Despite empirical success, this approach has
notable drawbacks. Flattening followed by fully connected layers discards
multilinear structure in the activations and requires many parameters. We
address these problems by incorporating tensor algebraic operations that
preserve multilinear structure at every layer. First, we introduce Tensor
Contraction Layers (TCLs) that reduce the dimensionality of their input while
preserving their multilinear structure using tensor contraction. Next, we
introduce Tensor Regression Layers (TRLs), which express outputs through a
low-rank multilinear mapping from a high-order activation tensor to an output
tensor of arbitrary order. We learn the contraction and regression factors
end-to-end, and produce accurate nets with fewer parameters. Additionally, our
layers regularize networks by imposing low-rank constraints on the activations
(TCL) and regression weights (TRL). Experiments on ImageNet show that, applied
to VGG and ResNet architectures, TCLs and TRLs reduce the number of parameters
compared to fully connected layers by more than 65% while maintaining or
increasing accuracy. In addition to the space savings, our approach's ability
to leverage topological structure can be crucial for structured data such as
MRI. In particular, we demonstrate significant performance improvements over
comparable architectures on three tasks associated with the UK Biobank dataset.
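To make the tensor contraction concrete, here is a minimal NumPy sketch of a TCL-style mode-wise contraction; the function names and example sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mode_dot(x, w, mode):
    # n-mode product: contract axis `mode` of x with w of shape
    # (out_dim, in_dim), where x.shape[mode] == in_dim.
    x = np.moveaxis(x, mode, 0)
    x = np.tensordot(w, x, axes=([1], [0]))
    return np.moveaxis(x, 0, mode)

def tensor_contraction_layer(x, factors):
    # Contract every non-batch mode of the activation tensor with a
    # (learnable) factor matrix, preserving the tensor's order.
    for mode, w in enumerate(factors, start=1):
        x = mode_dot(x, w, mode)
    return x

# Toy sizes: a batch of 4 activation tensors of shape (64, 8, 8) is
# contracted down to (32, 4, 4) without ever being flattened.
x = np.random.randn(4, 64, 8, 8)
factors = [np.random.randn(32, 64), np.random.randn(4, 8), np.random.randn(4, 8)]
y = tensor_contraction_layer(x, factors)
assert y.shape == (4, 32, 4, 4)
```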
Target Set Selection Parameterized by Clique-Width and Maximum Threshold
The Target Set Selection problem takes as input a graph and a
non-negative integer threshold thr(v) for every vertex v. A vertex v
becomes active as soon as at least thr(v) of its neighbors have
been activated. The objective is to select a smallest possible initial set of
vertices, the target set, whose activation eventually leads to the activation
of all vertices in the graph.
We show that Target Set Selection is in FPT when parameterized with the
combined parameters clique-width of the graph and the maximum threshold value.
This generalizes all previous FPT-membership results for the parameterization
by maximum threshold, and thereby solves an open question from the literature.
We stress that the time complexity of our algorithm is surprisingly
well-behaved and grows only single-exponentially in the parameters.
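To make the activation dynamics concrete, here is a small Python sketch that checks whether a candidate target set activates the whole graph; it illustrates the problem definition only, not the FPT algorithm of the paper.

```python
from collections import deque

def activates_all(adj, thr, target_set):
    # adj: dict mapping each vertex to its set of neighbors
    # thr: dict mapping each vertex to its non-negative threshold
    # A vertex becomes active once at least thr[v] neighbors are active.
    active = set(target_set)
    hits = {v: 0 for v in adj}
    queue = deque(active)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in active:
                continue
            hits[v] += 1
            if hits[v] >= thr[v]:
                active.add(v)
                queue.append(v)
    return len(active) == len(adj)

# Example: a path a-b-c with unit thresholds is activated by {a} alone.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
thr = {"a": 1, "b": 1, "c": 1}
assert activates_all(adj, thr, {"a"})
```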
Both Ligand- and Cell-Specific Parameters Control Ligand Agonism in a Kinetic Model of G Protein–Coupled Receptor Signaling
G protein–coupled receptors (GPCRs) exist in multiple dynamic states (e.g., ligand-bound, inactive, G protein–coupled) that influence G protein activation and ultimately response generation. In quantitative models of GPCR signaling that incorporate these varied states, parameter values are often uncharacterized or varied over large ranges, making it difficult to identify the important parameters and to intuit signaling outcomes. Here we identify the ligand- and cell-specific parameters that are important determinants of cell-response behavior in a dynamic model of GPCR signaling, using parameter variation and sensitivity analysis. The character of the response (i.e., positive/neutral/inverse agonism) is, not surprisingly, significantly influenced by a ligand's ability to bias the receptor into an active conformation. We also find that several cell-specific parameters, including the ratio of active to inactive receptor species, the rate constant for G protein activation, and the expression levels of receptors and G proteins, dramatically influence agonism. Expressing either receptor or G protein at levels severalfold above or below endogenous levels may result in system behavior inconsistent with that measured in endogenous systems. Finally, small variations in the cell-specific parameters identified by sensitivity analysis as significant determinants of response behavior are found to change ligand-induced responses from positive to negative, a phenomenon termed protean agonism. Our findings offer an explanation for the protean agonism reported in β2-adrenergic and α2A-adrenergic receptor systems.
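The abstract does not give the model equations, but the flavor of such a kinetic scheme can be sketched with a hypothetical two-state receptor model in SciPy; the species, rate constants, and ligand-bias term below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gpcr_rhs(t, y, L, k_on, k_off, k_act, k_deact, bias):
    # Hypothetical scheme: R <-> R* (ligand L biases the forward rate),
    # R* converts G to active G*; G* deactivates at a constant rate.
    R, Rs, G, Gs = y
    flux = k_on * (1.0 + bias * L) * R - k_off * Rs  # net R -> R*
    g_on = k_act * Rs * G                            # G -> G* driven by R*
    return [-flux, flux, -g_on + k_deact * Gs, g_on - k_deact * Gs]

y0 = [1.0, 0.0, 1.0, 0.0]                 # receptor and G protein, all inactive
params = (1.0, 0.05, 0.5, 1.0, 0.2, 5.0)  # L, k_on, k_off, k_act, k_deact, bias
sol = solve_ivp(gpcr_rhs, (0.0, 50.0), y0, args=params, dense_output=True)
print("late-time active G fraction:", sol.y[3, -1])
```

A sensitivity analysis of the kind described then amounts to perturbing each entry of `params` in turn and recording the change in the response readout (here, the final active G fraction).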
Correlation between IL1β expression level and morphological parameters proves the usefulness of morphology measures to predict the degree of activation of microglial cells
It is well known that microglial cells undergo an important change in morphology upon activation, so that form and function are intimately related. Upon activation, the microglial cell body enlarges and its ramifications shorten and become thicker. In parallel, a variety of cytokines and inflammatory mediators, such as IL1β, are released. However, the activation process is not all-or-nothing. Rather, cells can occur in subtle activation states or in a process of deactivation, so intermediate, less obvious phenotypes may appear.
Thus, we aimed to correlate the expression level of a well-defined marker of activation, IL1β, with different morphological parameters. To do so, we used an intracerebroventricular injection of neuraminidase to produce acute inflammation in rats. Brain sections were double-stained with IBA1, to obtain an image of the whole cell and its ramifications, and with IL1β, to assess the level of activation. Images were captured from the septofimbria (close to the injection site) and from the hypothalamus. The ratio of IL1β-positive pixels to IBA1-positive pixels was used to estimate the level of IL1β expression for each cell. Single microglial cell images were processed with ImageJ software to obtain outlined and filled shapes, which were used to compute (by means of the FracLac plugin) the following morphological parameters: fractal dimension, lacunarity, area, density, and perimeter.
All parameters showed a significant correlation with the level of expression of IL1β. This held for cells sampled from both brain areas studied. Density, lacunarity, and perimeter emerged as the best predictors of activation, that is, the parameters with the strongest correlation with the level of IL1β expression. Area, a parameter used extensively to assess microglial activation, showed the least significant correlation.
Thus, objectively measured morphological parameters correlate with the level of expression of IL1β and could therefore be used as predictors of the activation level of microglial cells.
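A minimal NumPy sketch of the per-cell expression estimate described above (the binary masks and the function name are our assumptions; the original analysis was performed in ImageJ):

```python
import numpy as np

def il1b_expression_ratio(il1b_mask, iba1_mask):
    # Per-cell level of IL1beta expression, estimated as the ratio of
    # IL1beta-positive pixels to IBA1-positive pixels within the cell.
    iba1_pixels = np.count_nonzero(iba1_mask)
    if iba1_pixels == 0:
        return float("nan")  # no cell pixels in this field
    return np.count_nonzero(il1b_mask & iba1_mask) / iba1_pixels

# Toy 3x3 field: 4 IBA1-positive pixels, 2 of them also IL1beta-positive.
iba1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)
il1b = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
assert il1b_expression_ratio(il1b, iba1) == 0.5
```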
Deep Learning with S-shaped Rectified Linear Activation Units
Rectified linear activation units are important components for
state-of-the-art deep convolutional networks. In this paper, we propose a novel
S-shaped rectified linear activation unit (SReLU) to learn both convex and
non-convex functions, imitating the multiple function forms given by the two
fundamental laws, namely the Weber-Fechner law and the Stevens law, in
psychophysics and neural sciences. Specifically, SReLU consists of three
piecewise linear functions, which are formulated by four learnable parameters.
The SReLU is learned jointly with the training of the whole deep network
through back propagation. During the training phase, to initialize SReLU in
different layers, we propose a "freezing" method to degenerate SReLU into a
predefined leaky rectified linear unit in the initial several training epochs
and then adaptively learn the good initial values. SReLU can be universally
used in the existing deep networks with negligible additional parameters and
computation cost. Experiments with two popular CNN architectures, Network in
Network and GoogLeNet on scale-various benchmarks including CIFAR10, CIFAR100,
MNIST and ImageNet demonstrate that SReLU achieves remarkable improvement
compared to other activation functions.
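The three-piece form described above can be written in a few lines of NumPy; the parameter names (t_l, a_l, t_r, a_r) label the four learnable parameters, but this forward pass is a sketch under the standard SReLU parameterization, not the authors' implementation.

```python
import numpy as np

def srelu(x, t_l, a_l, t_r, a_r):
    # S-shaped rectified linear unit: identity between the two thresholds,
    # linear with slope a_l below t_l and slope a_r above t_r.
    return np.where(x >= t_r, t_r + a_r * (x - t_r),
                    np.where(x <= t_l, t_l + a_l * (x - t_l), x))

# With t_l = 0, a small a_l, a very large t_r and a_r = 1, SReLU
# degenerates to a leaky ReLU, mimicking the "freezing" initialization
# mentioned in the abstract.
x = np.linspace(-3.0, 3.0, 7)
print(srelu(x, t_l=0.0, a_l=0.01, t_r=1e6, a_r=1.0))
```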
Modelling of polymer photodegradation for solar cell modules
The photooxidation process was modelled with input data consisting of the Arrhenius parameters A (the pre-exponential factor) and E (the activation energy).
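These two inputs translate directly into an Arrhenius rate expression; a minimal sketch follows, where the numerical values are placeholders rather than the paper's fitted parameters.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, E, T):
    # Arrhenius rate constant k = A * exp(-E / (R * T)), with A the
    # pre-exponential factor and E the activation energy (J/mol).
    return A * np.exp(-E / (R_GAS * T))

# Placeholder values: A = 1e13 1/s, E = 100 kJ/mol, T from 298 to 348 K.
for T in (298.0, 323.0, 348.0):
    print(T, arrhenius_rate(1e13, 1.0e5, T))
```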
