Decoherence-induced conductivity in the discrete 1D Anderson model: A novel approach to even-order generalized Lyapunov exponents
A recently proposed statistical model for the effects of decoherence on
electron transport manifests a decoherence-driven transition from
quantum-coherent localized to ohmic behavior when applied to the
one-dimensional Anderson model. Here we derive the resistivity in the ohmic
case and show that the transition to localized behavior occurs when the
coherence length surpasses a value which depends only on the second-order
generalized Lyapunov exponent. We determine the exact value of this exponent
for an infinite system, for arbitrary uncorrelated disorder and
electron energy. Likewise, all higher even-order generalized Lyapunov exponents
can be calculated, as exemplified for fourth order. An approximation for the
localization length (the inverse of the standard Lyapunov exponent) is presented,
assuming a log-normal limiting distribution for the dimensionless conductance.
This approximation works well in the limit of weak disorder, with the
exception of the band edges and the band center.
Comment: 12 pages, 5 figures
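The transfer-matrix picture behind this abstract can be sketched numerically. The following is a minimal illustration, not the paper's exact derivation: it estimates the standard Lyapunov exponent and a second-order generalized exponent for the 1D Anderson model from the recursion psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}. The energy, disorder strength, and the convention gamma_q = lim (1/(qN)) ln <|psi_N|^q> are illustrative assumptions.

```python
import numpy as np

def glyap(E=0.5, W=1.0, n_steps=5000, n_real=200, seed=0):
    """Estimate the standard Lyapunov exponent gamma and the second-order
    generalized exponent gamma2 for the 1D Anderson model, with site
    energies eps_n drawn uniformly from [-W/2, W/2].  Sketch only;
    parameters are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    psi_prev = np.zeros(n_real)
    psi = np.ones(n_real)
    log_growth = np.zeros(n_real)      # ln |psi_N| per disorder realization
    for _ in range(n_steps):
        eps = rng.uniform(-W / 2, W / 2, n_real)
        psi_prev, psi = psi, (E - eps) * psi - psi_prev
        norm = np.hypot(psi, psi_prev)  # renormalize to avoid overflow
        psi /= norm
        psi_prev /= norm
        log_growth += np.log(norm)
    gamma = log_growth.mean() / n_steps          # standard Lyapunov exponent
    # gamma2 in the convention gamma_q = lim (1/(qN)) ln <|psi_N|^q>, q = 2;
    # centring the logs keeps the exponentials numerically stable
    centred = log_growth - log_growth.mean()
    gamma2 = gamma + np.log(np.mean(np.exp(2 * centred))) / (2 * n_steps)
    return gamma, gamma2

gamma, gamma2 = glyap()
```

By Jensen's inequality the second-order exponent is never smaller than the standard one, which is why the single-parameter criterion in the abstract involves gamma2 rather than gamma itself.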
Multidataset Incremental Training for Optic Disc Segmentation
When convolutional neural networks are applied to image
segmentation, results depend greatly on the data sets used to train the
networks. Cloud providers offer multi-GPU and TPU virtual machines,
making the idea of cloud-based segmentation as a service attractive. In this
paper we study the problem of building a segmentation service, where
images would come from different acquisition instruments, by training a
generalized U-Net with images from a single or several datasets. We also
study the possibility of training with a single instrument and performing
quick retrains when more data become available. As our example we perform
segmentation of the optic disc in fundus images, which is useful for glaucoma
diagnosis. We use two publicly available data sets (RIM-One V3,
DRISHTI) for individual, mixed, or incremental training. We show that
multidataset or incremental training can produce results that are similar
to those published by researchers who use the same dataset for both
training and validation.
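The incremental-training workflow described above (train on one instrument's data, then quickly retrain when data from another instrument arrives) can be sketched in miniature. The example below is a hypothetical stand-in: plain logistic regression on synthetic 2D data replaces the U-Net and fundus images, and "instruments" A and B are simulated by different labeling rules. Only the warm-start pattern carries over.

```python
import numpy as np

def make_data(rng, n, w_true):
    """Synthetic stand-in for one instrument's dataset: 2D points labeled
    by the sign of a linear rule (hypothetical, for illustration only)."""
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(X, y, w=None, lr=0.5, steps=500):
    """Gradient descent on logistic loss; passing w warm-starts the model,
    which is the 'quick retrain' step of incremental training."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float(((X @ w > 0) == (y > 0.5)).mean())

rng = np.random.default_rng(0)
XA, yA = make_data(rng, 400, np.array([1.0, 1.0]))    # "instrument A"
XB, yB = make_data(rng, 400, np.array([1.0, -1.0]))   # "instrument B"
XBt, yBt = make_data(rng, 400, np.array([1.0, -1.0]))  # held-out B data

w = train(XA, yA)                   # initial training on A only
acc_before = accuracy(XBt, yBt, w)  # poor transfer to instrument B
w = train(XB, yB, w=w)              # incremental retrain on B
acc_after = accuracy(XBt, yBt, w)
```

The warm-started retrain recovers good accuracy on the new instrument's data without training from scratch, mirroring the abstract's claim that quick retrains can match single-dataset training.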