
    AutoDIAL: Automatic DomaIn Alignment Layers

    Classifiers trained on given databases perform poorly when tested on data acquired in different settings. Domain adaptation explains this through a shift between the distributions of the source and target domains. Traditional attempts to align them reduce the domain shift by introducing into the objective function loss terms that measure the discrepancy between the source and target distributions. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Unlike previous works, which define a priori in which layers adaptation should be performed, our method automatically learns the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.
    Comment: arXiv admin note: substantial text overlap with arXiv:1702.06332; added supplementary material
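
    The mechanism suggests a short sketch. Below is a minimal, hypothetical PyTorch rendering of such a layer: a batch-norm-style module whose normalization statistics for each domain blend source and target batch statistics through a learnable mixing parameter alpha, here assumed to be clamped to [0.5, 1] so that alpha = 1 gives fully separate per-domain statistics and alpha = 0.5 fully shared ones. This is an illustration of the idea under those assumptions, not the authors' implementation; all names and shapes are invented.

    import torch
    import torch.nn as nn

    class DomainAlignmentLayer(nn.Module):
        """Batch-norm-like layer blending source/target statistics (sketch)."""

        def __init__(self, num_features, eps=1e-5):
            super().__init__()
            # Degree of cross-domain mixing, learned jointly with the network.
            self.alpha = nn.Parameter(torch.tensor(1.0))
            self.weight = nn.Parameter(torch.ones(num_features))  # affine scale
            self.bias = nn.Parameter(torch.zeros(num_features))   # affine shift
            self.eps = eps

        def forward(self, x_src, x_tgt):
            # Inputs: (batch, num_features) activations from each domain.
            a = self.alpha.clamp(0.5, 1.0)
            mu_s, var_s = x_src.mean(0), x_src.var(0, unbiased=False)
            mu_t, var_t = x_tgt.mean(0), x_tgt.var(0, unbiased=False)
            # Each domain is normalized mostly by its own statistics and partly
            # by the other domain's, to a degree learned per layer.
            xs = (x_src - (a * mu_s + (1 - a) * mu_t)) \
                / torch.sqrt(a * var_s + (1 - a) * var_t + self.eps)
            xt = (x_tgt - (a * mu_t + (1 - a) * mu_s)) \
                / torch.sqrt(a * var_t + (1 - a) * var_s + self.eps)
            return xs * self.weight + self.bias, xt * self.weight + self.bias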

    Nematic cells with defect-patterned alignment layers

    Using Monte Carlo simulations of the Lebwohl-Lasher model we study the director ordering in a nematic cell where the top and bottom surfaces are patterned with a lattice of ±1 point topological defects of lattice spacing a. We find that the nematic order depends crucially on the ratio of the height of the cell H to a. When H/a ≳ 0.9 the system is very well ordered and the frustration induced by the lattice of defects is relieved by a network of half-integer defect lines which emerge from the point defects and hug the top and bottom surfaces of the cell. When H/a ≲ 0.9 the system is disordered and the half-integer defect lines thread through the cell, joining point defects on the top and bottom surfaces. We present a simple physical argument in terms of the length of the defect lines to explain these results. To facilitate eventual comparison with experimental systems we also simulate optical textures and study the switching behavior in the presence of an electric field.
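
    For readers unfamiliar with the model, a minimal Metropolis Monte Carlo sweep for the Lebwohl-Lasher energy, E = -Σ P2(n_i · n_j) over nearest-neighbor pairs of unit directors, can be sketched as below. The sketch assumes a small periodic cubic lattice with no defect-patterned surfaces or electric field, so it illustrates only the sampling scheme, not the paper's cell geometry; the lattice size and temperature are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    L, T = 8, 1.0                      # lattice size and reduced temperature
    n = rng.normal(size=(L, L, L, 3))  # one unit director per site
    n /= np.linalg.norm(n, axis=-1, keepdims=True)

    def p2(c):
        """Second Legendre polynomial P2(cos theta)."""
        return 1.5 * c * c - 0.5

    def site_energy(n, i, j, k):
        """Lebwohl-Lasher energy of site (i,j,k) with its six neighbors."""
        e = 0.0
        for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = n[(i + d[0]) % L, (j + d[1]) % L, (k + d[2]) % L]
            e -= p2(np.dot(n[i, j, k], nb))
        return e

    def sweep(n):
        """One Metropolis sweep: trial rotations accepted with prob. exp(-dE/T)."""
        for _ in range(L ** 3):
            i, j, k = rng.integers(0, L, size=3)
            old, e_old = n[i, j, k].copy(), site_energy(n, i, j, k)
            trial = old + 0.3 * rng.normal(size=3)   # small random rotation
            n[i, j, k] = trial / np.linalg.norm(trial)
            if rng.random() >= np.exp(-(site_energy(n, i, j, k) - e_old) / T):
                n[i, j, k] = old                     # reject: restore director
        return n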

    Importance of alignment layers in blue phase liquid crystal devices

    In this paper we present how alignment layers affect blue phase liquid crystals and how we can use this effect to our advantage. We argue that, contrary to the prevailing perception, alignment layers can be of vital importance to blue phase liquid crystal based devices.

    Internal alignment and position resolution of the silicon tracker of DAMPE determined with orbit data

    The DArk Matter Particle Explorer (DAMPE) is a space-borne particle detector designed to probe electrons and gamma-rays in the few GeV to 10 TeV energy range, as well as cosmic-ray proton and nuclei components between 10 GeV and 100 TeV. The silicon-tungsten tracker-converter is a crucial component of DAMPE. It allows the direction of incoming photons converting into electron-positron pairs to be estimated, and the trajectory and charge (Z) of cosmic-ray particles to be identified. It consists of 768 silicon micro-strip sensors assembled in 6 double layers with a total active area of 6.6 m². Silicon planes are interleaved with three layers of tungsten plates, resulting in about one radiation length of material in the tracker. Internal alignment parameters of the tracker have been determined on orbit, with non-showering protons and helium nuclei. We describe the alignment procedure and present the position resolution and alignment stability measurements.
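
    The abstract does not spell out the procedure, but the core idea of track-based internal alignment can be sketched as follows: fit straight tracks to the recorded hits, then iteratively shift each detector plane by the mean residual of the hits it recorded. The toy below (plane positions, misalignment scales, and track counts are all invented) illustrates that loop; it is not the DAMPE algorithm, which also treats sensor rotations and uses real on-orbit proton and helium tracks.

    import numpy as np

    rng = np.random.default_rng(1)
    z = np.linspace(0.0, 60.0, 12)         # plane positions along the axis (cm)
    true_off = rng.normal(0, 0.02, 12)     # unknown plane misalignments (cm)
    offset = np.zeros(12)                  # alignment corrections to determine

    def make_track():
        """Hits of one straight (non-showering) track on every plane."""
        slope, icpt = rng.normal(0, 0.05), rng.normal(0, 1.0)
        return slope * z + icpt + true_off + rng.normal(0, 0.005, 12)

    tracks = [make_track() for _ in range(2000)]

    for _ in range(5):                     # iterate: fit tracks, update offsets
        res_sum = np.zeros(12)
        for hits in tracks:
            corrected = hits - offset
            a, b = np.polyfit(z, corrected, 1)        # straight-line track fit
            res_sum += corrected - (a * z + b)        # per-plane residuals
        offset += res_sum / len(tracks)
    # Note: a global translation or tilt of all planes (the "weak modes")
    # leaves every residual unchanged and cannot be recovered this way.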

    Deposition of biaxially aligned YSZ layers by dual unbalanced magnetron sputtering

    Biaxially aligned YSZ (Yttria Stabilised Zirconia) layers were deposited by unbalanced magnetron sputtering in a dual magnetron geometry. The unbalanced magnetrons were mounted in such a way that the angle between the target normal and the substrate normal was 55° for both magnetrons. The target-substrate distance was 13 cm for both magnetrons. A better homogeneity in deposition rate and biaxial alignment was obtained with respect to depositions with a single unbalanced magnetron. The YSZ layers were characterized by XRD θ/2θ scans and (111) pole figures, and showed a [001] out-of-plane orientation and a [110] in-plane orientation. The best biaxially aligned YSZ layers obtained so far showed a FWHM of 21° in (111) pole figures. The influence of the magnet configuration (closed-field or mirror-field) and sputter conditions on the biaxial alignment was investigated. Gauss and Langmuir probe measurements were performed to investigate the influence of the magnet configuration and sputter conditions on the plasma density and the magnetic field lines.

    Direct Feedback Alignment with Sparse Connections for Local Learning

    Recent advances in deep neural networks (DNNs) owe their success to training algorithms that use backpropagation and gradient descent. Backpropagation, while highly effective on von Neumann architectures, becomes inefficient when scaling to large networks. Commonly referred to as the weight transport problem, each neuron's dependence on the weights and errors located deeper in the network requires exhaustive data movement, which presents a key problem in enhancing the performance and energy efficiency of machine-learning hardware. In this work, we propose a bio-plausible alternative to backpropagation, drawing from advances in feedback alignment algorithms, in which the error computation at a single synapse reduces to the product of three scalar values. Using a sparse feedback matrix, we show that a neuron needs only a fraction of the information previously used by feedback alignment algorithms. Consequently, memory and compute can be partitioned and distributed whichever way produces the most efficient forward pass, so long as a single error can be delivered to each neuron. Our results show orders of magnitude improvement in data movement and a 2× improvement in multiply-and-accumulate operations over backpropagation. Like previous work, we observe that any variant of feedback alignment suffers significant losses in classification accuracy on deep convolutional neural networks. By transferring trained convolutional layers and training the fully connected layers using direct feedback alignment, we demonstrate that direct feedback alignment can obtain results competitive with backpropagation. Furthermore, we observe that using an extremely sparse feedback matrix, rather than a dense one, results in a small accuracy drop while yielding hardware advantages. All the code and results are available at https://github.com/bcrafton/ssdfa.
    Comment: 15 pages, 8 figures
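
    The update rule is compact enough to sketch. In direct feedback alignment, the output error is projected straight to each hidden layer through a fixed random matrix instead of the transposed forward weights, which removes the weight transport problem; the sparse variant simply zeroes most entries of that matrix. The NumPy toy below (layer sizes, sparsity level, and learning rate are invented) shows one training step of a two-layer network; consult the linked repository for the authors' actual code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out, lr = 784, 256, 10, 0.05

    W1 = rng.normal(0, 0.05, (n_in, n_hid))   # forward weights, trained
    W2 = rng.normal(0, 0.05, (n_hid, n_out))
    # Fixed sparse random feedback matrix: ~90% zeros, never trained.
    B = rng.normal(0, 0.05, (n_out, n_hid)) * (rng.random((n_out, n_hid)) < 0.1)

    def train_step(x, y_onehot):
        global W1, W2
        a1 = x @ W1
        h1 = np.maximum(a1, 0.0)            # ReLU hidden layer
        e = h1 @ W2 - y_onehot              # output error (linear readout)
        # DFA: the hidden "gradient" is the error sent through fixed B, gated
        # by the local activation derivative, i.e. a product of local factors.
        d1 = (e @ B) * (a1 > 0)
        W2 -= lr * np.outer(h1, e)
        W1 -= lr * np.outer(x, d1)

    # Example: one step on a random input with a random one-hot label.
    train_step(rng.random(n_in), np.eye(n_out)[rng.integers(n_out)])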