
    On the Chern character in Higher Twisted K-theory and spherical T-duality

    In this paper, we construct the Chern character in higher twisted K-theory for higher twists arising from cohomotopy classes; it maps into higher twisted cohomology. We show that it induces an isomorphism between higher twisted K-theory and higher twisted cohomology over the reals. Finally, we compute spherical T-duality in higher twisted K-theory and higher twisted cohomology in very general cases.
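    A hedged illustration of the main statement (the notation below is assumed for exposition, not quoted from the paper): the Chern character realises an isomorphism after tensoring with the reals.

    ```latex
    % Hedged sketch: K^*(X,H) denotes the higher twisted K-theory of X
    % with higher twist H, and H^*(X,H;\mathbb{R}) the corresponding
    % higher twisted cohomology; both notations are assumed here.
    \mathrm{ch} \colon K^{*}(X, H) \otimes_{\mathbb{Z}} \mathbb{R}
      \xrightarrow{\;\cong\;} H^{*}(X, H; \mathbb{R})
    ```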

    On progressive sharpening, flat minima and generalisation

    We present a new approach to understanding the relationship between loss curvature and input-output model behaviour in deep learning. Specifically, we use existing empirical analyses of the spectrum of deep network loss Hessians to ground an ansatz tying together the loss Hessian and the input-output Jacobian of a deep neural network over training samples throughout training. We then prove a series of theoretical results which quantify the degree to which the input-output Jacobian of a model approximates its Lipschitz norm over a data distribution, and deduce a novel generalisation bound in terms of the empirical Jacobian. We use our ansatz, together with our theoretical results, to give a new account of the recently observed progressive sharpening phenomenon, as well as the generalisation properties of flat minima. Experimental evidence is provided to validate our claims.
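    A minimal sketch of the central quantity, assuming a PyTorch model (the function name and toy network below are illustrative, not the authors' code): the empirical input-output Jacobian norm over training samples, in terms of which the generalisation bound is stated.

    ```python
    # Hedged sketch (illustrative, not the authors' code): estimate the
    # empirical input-output Jacobian norm of a model over training samples.
    import torch

    def empirical_jacobian_norm(model, inputs):
        """Mean spectral norm of d(model)/d(input) over a batch."""
        norms = []
        for x in inputs:
            # Jacobian of the model outputs w.r.t. one input sample.
            J = torch.autograd.functional.jacobian(model, x.unsqueeze(0))
            J = J.reshape(-1, x.numel())
            # Largest singular value = operator (spectral) norm.
            norms.append(torch.linalg.matrix_norm(J, ord=2))
        return torch.stack(norms).mean()

    if __name__ == "__main__":
        model = torch.nn.Sequential(
            torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
        )
        x = torch.randn(8, 10)
        print(empirical_jacobian_norm(model, x))
    ```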

    Accounting for the Dependence of Coil Sensitivity on Sample Thickness and Lift-Off in Inductively Coupled Photoconductance Measurements

    Inductively coupled photoconductance measurements are widely used to characterize carrier recombination in crystalline silicon. We show that, contrary to what is usually supposed, the sensitivity of such measurements depends significantly on sample thickness in the range of typical wafer thicknesses, due to the attenuation of the magnetic field with distance from the coil. Sample thickness, as well as any separation from the coil (lift-off), should therefore be taken into account in system calibration in order to avoid systematic errors. We investigate the magnitude of this effect both experimentally and via analytical and finite-element modeling for a range of commercial photoconductance measurement systems with varying coil geometry. Finite-element modeling is used to identify the functional form of the attenuation in the regime of interest, and simple formulae are derived that allow the experimentalist to correct for sample thickness and lift-off. Close agreement is found between modeled and experimental attenuation behavior. Finite-element modeling is also used to evaluate the magnitude of skin effects, which are found to have a minor influence on the measured conductance for the most highly conductive samples, and to determine the lateral spatial variation of the coil sensitivity, which is important for lifetime imaging techniques where photoconductance measurements are used for calibration.
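    The abstract does not reproduce the derived correction formulae, so the following is only a hedged sketch under an assumed exponential attenuation of the coil field with distance, B(z) ∝ exp(-z/z0); the decay length z0 and all numbers are illustrative placeholders, not values from the paper.

    ```python
    # Hedged sketch, NOT the paper's derived formulae: thickness/lift-off
    # correction assuming the coil field decays as exp(-z/z0) with distance z.
    import numpy as np

    def sensitivity_factor(thickness, lift_off, z0):
        """Mean of exp(-z/z0) over z in [lift_off, lift_off + thickness],
        i.e. the field sensitivity averaged over the sample depth."""
        t = thickness
        return (z0 / t) * (np.exp(-lift_off / z0) - np.exp(-(lift_off + t) / z0))

    # Example: rescale a conductance measured on a 180 um wafer at 0.5 mm
    # lift-off to the calibration geometry (300 um wafer, zero lift-off).
    z0 = 2.0e-3  # assumed coil-field decay length in metres (illustrative)
    f_meas = sensitivity_factor(180e-6, 0.5e-3, z0)
    f_cal = sensitivity_factor(300e-6, 0.0, z0)
    print(1.0 * f_cal / f_meas)  # corrected value for a measured reading of 1.0
    ```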

    On Quantizing Implicit Neural Representations

    The role of quantization within implicit/coordinate neural networks is still not fully understood. We note that using a canonical fixed quantization scheme during training produces poor performance at low rates because the network weight distributions change over the course of training. In this work, we show that a non-uniform quantization of neural weights can lead to significant improvements. Specifically, we demonstrate that clustered quantization enables improved reconstruction. Finally, by characterising a trade-off between quantization and network capacity, we demonstrate that it is possible (though memory-inefficient) to reconstruct signals using binary neural networks. We demonstrate our findings experimentally on 2D image reconstruction and 3D radiance fields, and show that simple quantization methods and architecture search can achieve compression of NeRF to less than 16 kB with minimal loss in performance (323x smaller than the original NeRF).
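    A minimal sketch of the clustered (non-uniform) quantization idea, assuming a k-means codebook over the flattened weights (the helper kmeans_quantize is illustrative, not the paper's implementation):

    ```python
    # Hedged sketch: non-uniform "clustered" quantization of a weight tensor
    # via k-means. The cluster centres form the codebook; each weight is
    # stored as a small integer index into it.
    import numpy as np

    def kmeans_quantize(w, n_clusters=16, iters=25):
        flat = w.reshape(-1)
        # Initialise centres at quantiles so empty clusters are unlikely.
        centres = np.quantile(flat, np.linspace(0.0, 1.0, n_clusters))
        for _ in range(iters):
            idx = np.abs(flat[:, None] - centres[None, :]).argmin(axis=1)
            for k in range(n_clusters):
                members = flat[idx == k]
                if members.size:
                    centres[k] = members.mean()
        idx = np.abs(flat[:, None] - centres[None, :]).argmin(axis=1)
        return centres, idx.reshape(w.shape)  # codebook + per-weight indices

    w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
    codebook, indices = kmeans_quantize(w)
    w_hat = codebook[indices]  # dequantised weights
    print(np.abs(w - w_hat).mean())
    ```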