Part-to-whole Registration of Histology and MRI using Shape Elements
Image registration between histology and magnetic resonance imaging (MRI) is
a challenging task due to differences in structural content and contrast. Specimens that are too thick or wide cannot be processed whole and must be cut into smaller pieces. This dramatically increases the complexity of the problem, since each piece must be pre-aligned individually by hand. To the best of our knowledge, no automatic method can reliably locate such a piece of tissue within its corresponding whole in the MRI slice and align it without any prior information. We propose a novel automatic approach to the joint problem of
multimodal registration between histology and MRI, when only a fraction of
tissue is available from histology. The approach relies on the representation
of images using their level lines so as to reach contrast invariance. Shape
elements obtained via the extraction of bitangents are encoded in a
projective-invariant manner, which permits the identification of common pieces
of curves between two images. We evaluated the approach on human brain
histology and compared resulting alignments against manually annotated ground
truths. Considering the complexity of the brain folding patterns, preliminary
results are promising and suggest the use of characteristic and meaningful
shape elements for improved robustness and efficiency.
Comment: Paper accepted at ICCV Workshop (Bio-Image Computing).
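The contrast invariance the abstract relies on can be illustrated directly: the family of upper level sets of an image (whose boundaries are its level lines) is unchanged by any strictly increasing contrast change. A minimal numpy sketch on a toy random image (my illustration, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))  # toy image, illustration only

def upper_level_sets(im):
    # Family of boolean upper level sets {im >= t}, one per gray level;
    # their boundaries are the image's level lines.
    return [im >= t for t in np.unique(im)]

def contrast(im):
    # A strictly increasing gray-level map, i.e. a contrast change.
    return np.sqrt(im.astype(float))

def as_set_family(masks):
    # Hashable representation of each mask so families can be compared.
    return {frozenset(zip(*np.nonzero(m))) for m in masks}

a = as_set_family(upper_level_sets(img))
b = as_set_family(upper_level_sets(contrast(img)))
print(a == b)  # the level-set family, hence the level lines, is unchanged
```

Because a monotone map only relabels gray levels, any shape element extracted from level lines (such as the bitangent-based descriptors in the paper) inherits this invariance for free.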
Semi-analytical modelling of linear mode coupling in few-mode fibers
This paper reviews and extends a method for the semi-analytical solution of the coupled linear differential equations that describe the linear mode coupling arising in few-mode fibers due to waveguide imperfections. The semi-analytical solutions obtained proved accurate when compared with numerical solution methods. These solutions were integrated into a multi-section model with split steps for mode dispersion and mode coupling. Simulations using this model matched the analytical predictions for the statistics of group delays in few-mode fiber links, considering different coupling regimes with and without mode-delay management.
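The multi-section split-step model described above can be sketched numerically: each section applies the uncoupled mode delays and then a unitary coupling matrix, and the mode group delays are the eigenvalues of the group-delay operator G = i (dT/dω) T⁻¹. A minimal two-mode sketch (all parameters and the random-coupling choice are my assumptions, not the paper's code):

```python
import numpy as np

def random_unitary(n, rng):
    # Random unitary via QR of a complex Gaussian matrix.
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

def transfer(omega, delays, couplers):
    # Multi-section model: each section applies the mode delays
    # (split step), then a unitary mode-coupling matrix.
    T = np.eye(len(delays), dtype=complex)
    for Q in couplers:
        T = Q @ np.diag(np.exp(-1j * omega * np.asarray(delays))) @ T
    return T

def group_delays(delays, couplers, omega=0.0, d=1e-6):
    # Eigenvalues of the group-delay operator G = i (dT/domega) T^{-1},
    # computed with a central finite difference in omega.
    dT = (transfer(omega + d, delays, couplers)
          - transfer(omega - d, delays, couplers)) / (2 * d)
    G = 1j * dT @ np.linalg.inv(transfer(omega, delays, couplers))
    return np.sort(np.linalg.eigvalsh((G + G.conj().T) / 2))

# Two-mode fiber, 100 sections, per-section mode delays of +/-1 (a.u.).
rng = np.random.default_rng(0)
K, delays = 100, [-1.0, 1.0]
couplers = [random_unitary(2, rng) for _ in range(K)]
gd = group_delays(delays, couplers)
print(gd.max() - gd.min())  # far below the uncoupled spread of 2 * K
```

With strong random coupling the group-delay spread accumulates roughly as the square root of the number of sections rather than linearly, which is the statistic such multi-section simulations are checked against.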
Guidance, flight mechanics and trajectory optimization. Volume 6 - The N-body problem and special perturbation techniques
Analytical formulations and numerical integration methods for the N-body problem and special perturbation techniques.
Finding Critical and Gradient-Flat Points of Deep Neural Network Loss Functions
Despite the fact that the loss functions of deep neural networks are highly non-convex, gradient-based optimization algorithms converge to approximately the same performance from many random initial points. This makes neural networks easy to train, which, combined with their high representational capacity and implicit and explicit regularization strategies, leads to machine-learned algorithms of high quality with reasonable computational cost in a wide variety of domains.
One thread of work has focused on explaining this phenomenon by numerically characterizing the local curvature at critical points of the loss function, where gradients are zero. Such studies have reported that the loss functions used to train neural networks have no local minima that are much worse than global minima, backed up by arguments from random matrix theory. More recent theoretical work, however, has suggested that bad local minima do exist.
In this dissertation, we show that one cause of this gap is that the methods used to numerically find critical points of neural network losses suffer, ironically, from a bad local minimum problem of their own. This problem is caused by gradient-flat points, where the gradient vector is in the kernel of the Hessian matrix of second partial derivatives. At these points, the loss function becomes, to second order, linear in the direction of the gradient, which violates the assumptions necessary to guarantee convergence for second-order critical-point-finding methods. We present evidence that approximately gradient-flat points are a common feature of several prototypical neural network loss functions.
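A gradient-flat point can be made concrete with a toy function (my example, not from the dissertation): for f(x, y) = x³ + x + y², the origin has gradient (1, 0) ≠ 0 while the Hessian [[0, 0], [0, 2]] annihilates it, so the Newton system H d = −g that second-order critical-point-finding methods solve is inconsistent there:

```python
import numpy as np

# Toy loss f(x, y) = x**3 + x + y**2, chosen so that the origin is
# gradient-flat: the gradient is nonzero but lies in the Hessian's kernel.
def grad(p):
    x, y = p
    return np.array([3 * x**2 + 1, 2 * y])

def hess(p):
    x, y = p
    return np.array([[6 * x, 0.0], [0.0, 2.0]])

def is_gradient_flat(p, tol=1e-8):
    """True if grad(p) is nonzero yet annihilated by hess(p)."""
    g, H = grad(p), hess(p)
    return np.linalg.norm(g) > tol and np.linalg.norm(H @ g) < tol

print(is_gradient_flat(np.zeros(2)))           # the origin is gradient-flat
print(is_gradient_flat(np.array([1.0, 0.0])))  # a generic point is not
```

At such a point the loss is, to second order, exactly linear along the gradient, so a Newton-type iteration either diverges or stalls rather than converging to a true critical point.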