Study of noise effects in electrical impedance tomography with resistor networks
We present a study of the numerical solution of the two-dimensional
electrical impedance tomography problem, with noisy measurements of the
Dirichlet-to-Neumann map. The inversion uses parametrizations of the
conductivity on optimal grids. The grids are optimal in the sense that finite
volume discretizations on them give spectrally accurate approximations of the
Dirichlet-to-Neumann map. These approximations are the Dirichlet-to-Neumann
maps of special resistor networks, which are uniquely recoverable from the
measurements.
Inversion on optimal grids has been proposed and analyzed recently, but the
study of noise effects on the inversion has not been carried out. In this paper
we present a numerical study of both the linearized and the nonlinear inverse
problem. We take three different parametrizations of the unknown conductivity,
with the same number of degrees of freedom. We find that the parametrization
induced by the inversion on optimal grids is the most efficient of the three,
because it gives the smallest standard deviation of the maximum a posteriori
estimates of the conductivity, uniformly in the domain. For the nonlinear
problem we compute the mean and variance of the maximum a posteriori estimates
of the conductivity, on optimal grids. For small noise, we find that the
estimates are unbiased and their variance is very close to the optimal one,
given by the Cramér-Rao bound. For larger noise we use regularization and
quantify the trade-off between reducing the variance and introducing bias in
the solution. Both the full and partial measurement setups are considered.
Comment: submitted to Inverse Problems and Imaging
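The trade-off between reducing variance and introducing bias can be
illustrated on a generic ill-posed linear inverse problem. The sketch below
uses a synthetic forward operator and Gaussian noise (illustrative
assumptions, not the paper's EIT discretization) and computes the exact bias
and variance of the Tikhonov (Gaussian-prior MAP) estimate from the SVD
filter factors:

```python
import numpy as np

# Minimal sketch of the bias-variance trade-off for Tikhonov-regularized
# MAP estimation. The operator, noise level, and regularization weights
# below are hypothetical stand-ins, not the paper's EIT setup.
rng = np.random.default_rng(0)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 6, n)          # decaying singular values
A = U @ np.diag(s) @ V.T                   # ill-conditioned forward map
x_true = V @ rng.standard_normal(n)        # unknown to be estimated
sigma = 1e-4                               # measurement noise std

def bias_and_variance(lam):
    """Exact bias norm and total variance of the Tikhonov (Gaussian-prior
    MAP) estimate x_hat = (A^T A + lam I)^{-1} A^T y, y = A x_true + noise,
    computed in the SVD basis of A."""
    f = s**2 / (s**2 + lam)                # filter factors
    coeffs = V.T @ x_true
    bias = np.linalg.norm((1.0 - f) * coeffs)
    var = sigma**2 * np.sum((s / (s**2 + lam))**2)
    return bias, var

b0, v0 = bias_and_variance(1e-9)           # weak regularization
b1, v1 = bias_and_variance(1e-2)           # strong regularization
# Stronger regularization trades variance for bias: v1 < v0 but b1 > b0.
```

As the regularization weight grows, every filter factor shrinks, so the
variance decreases monotonically while the bias grows, which is the
quantitative trade-off the abstract refers to.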
A jamming transition from under- to over-parametrization affects loss landscape and generalization
We argue that in fully-connected networks a phase transition delimits the
over- and under-parametrized regimes where fitting can or cannot be achieved.
Under some general conditions, we show that this transition is sharp for the
hinge loss. In the whole over-parametrized regime, poor minima of the loss are
not encountered during training since the number of constraints to satisfy is
too small to hamper minimization. Our findings support a link between this
transition and the generalization properties of the network: as we increase the
number of parameters of a given model, starting from an under-parametrized
network, we observe that the generalization error displays three phases: (i)
initial decay, (ii) increase until the transition point --- where it displays a
cusp --- and (iii) slow decay toward a constant for the rest of the
over-parametrized regime. Thereby we identify the region where the classical
phenomenon of over-fitting takes place, and the region where the model keeps
improving, in line with previous empirical observations for modern neural
networks.
Comment: arXiv admin note: text overlap with arXiv:1809.0934
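The constraint-counting argument above (fitting succeeds when the number of
parameters exceeds the number of margin constraints to satisfy) can be
sketched with a linear model in place of a network. Everything below (data
sizes, learning rate, step counts) is an illustrative assumption, not the
paper's setup:

```python
import numpy as np

# Toy stand-in for the paper's fully-connected networks: a linear model on
# random data, where "fitting" means satisfying the P margin constraints
# y_i <w, x_i> >= 1 that make the hinge loss vanish.
rng = np.random.default_rng(1)
P = 200                                  # number of samples/constraints
X = rng.standard_normal((P, 400))
y = rng.choice([-1.0, 1.0], size=P)

def hinge(w, Z):
    return np.maximum(0.0, 1.0 - y * (Z @ w)).mean()

# Over-parametrized (n >= P): the P equations <w, z_i> = y_i are generically
# solvable, so all margins equal 1 and the hinge loss is exactly zero.
w_over = np.linalg.lstsq(X, y, rcond=None)[0]
loss_over = hinge(w_over, X)

# Under-parametrized (n << P): subgradient descent on the hinge loss
# plateaus at a strictly positive value; with 200 random labels in 10
# dimensions, the constraints cannot all be satisfied.
Z = X[:, :10]
w = np.zeros(10)
for _ in range(1000):
    viol = y * (Z @ w) < 1.0             # constraints still violated
    w += 0.05 * ((viol * y) @ Z) / P     # minus the hinge subgradient
loss_under = hinge(w, Z)
```

The sharp change between these two regimes as the parameter count crosses
the constraint count is the linear-model analogue of the jamming transition
the abstract describes for networks.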
Network estimation in State Space Model with L1-regularization constraint
Biological networks have arisen as an attractive paradigm of genomic science
ever since the introduction of large-scale genomic technologies, which carried
the promise of elucidating relationships in functional genomics. Microarray
technologies, coupled with appropriate mathematical or statistical models,
have made it possible to identify dynamic regulatory networks and to measure
the time course of the expression levels of many genes simultaneously.
However, a key limitation lies in the high-dimensional nature of such data,
coupled with the fact that these gene expression data are known to include
some hidden process. In that regard, we are concerned with deriving a method
for inferring a sparse dynamic network in a high-dimensional setting. We
assume that the
observations are noisy measurements of gene expression in the form of mRNAs,
whose dynamics can be described by some unknown or hidden process. We build an
input-dependent linear state space model from these hidden states and
demonstrate how an incorporated regularization constraint in an
Expectation-Maximization (EM) algorithm can be used to reverse engineer
transcriptional networks from gene expression profiling data. This corresponds
to estimating the model interaction parameters. The proposed method is
illustrated on time-course microarray data from a well-established T-cell
dataset. At the optimal tuning parameters we found the genes TRAF5, JUND,
CDK4, CASP4, CD69, and C3X1 to have among the highest numbers of
inward-directed connections, and FYB, CCNA2, AKT1, and CASP8 to have among
the highest numbers of outward-directed connections. We recommend these
genes as objects for further investigation.
Caspase 4 is also found to activate the expression of JunD, which in turn
represses the cell cycle regulator CDC2.
Comment: arXiv admin note: substantial text overlap with arXiv:1308.359
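The role of the L1 constraint in producing a sparse interaction network can
be sketched in a stripped-down setting. The code below drops the paper's
hidden states and EM machinery and fits a directly observed VAR(1) model
with an L1 penalty via proximal gradient (ISTA); all sizes, noise levels,
and the tuning parameter are hypothetical:

```python
import numpy as np

# Simplified sketch of L1-penalized network estimation: states are observed
# directly and follow x_{t+1} = A x_t + noise, and we estimate a sparse
# interaction matrix A. (The paper additionally handles hidden states via
# an EM algorithm; this shows only the sparse estimation idea.)
rng = np.random.default_rng(2)
g, T = 8, 400                               # "genes" and time points
A_true = np.where(rng.random((g, g)) < 0.15, 0.5, 0.0)
np.fill_diagonal(A_true, 0.8)
A_true *= 0.9 / np.abs(np.linalg.eigvals(A_true)).max()   # stable dynamics

x = np.zeros((T, g))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.3 * rng.standard_normal(g)
Xp, Xn = x[:-1], x[1:]                      # predictors / next-step responses

def ista(lam, steps=500):
    """min_A ||Xn - Xp A^T||_F^2 / (2 T') + lam * ||A||_1 via ISTA."""
    A = np.zeros((g, g))
    L = np.linalg.eigvalsh(Xp.T @ Xp / len(Xp)).max()     # Lipschitz constant
    for _ in range(steps):
        grad = (A @ Xp.T - Xn.T) @ Xp / len(Xp)
        A = A - grad / L                                  # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
    return A

A_hat = ista(lam=0.05)
# The L1 penalty drives small coefficients exactly to zero, so the directed
# network (who regulates whom) can be read off the support of A_hat.
```

Inward and outward connection counts, as reported in the abstract, would
then correspond to the number of nonzeros in a row or column of the
estimated interaction matrix.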