
    Subgrouped Real Time Recurrent Learning Neural Networks

    A subgrouped Real Time Recurrent Learning (RTRL) network was evaluated. The one-layer net successfully learns the XOR problem and can be trained to perform time-dependent functions. The net was tested as a predictor of a signal's behavior based on its past behavior. While the net was not able to predict the signal's future behavior, it tracked the signal closely. The net was also tested as a classifier for time-varying phenomena: the differentiation of five classes of vehicle images based on features extracted from the visual information. The net achieved 99.2% accuracy in recognizing the five vehicle classes. The behavior of the subgrouped RTRL net was compared to the RTRL network described in Capt R. Lindsey's AFIT Master's thesis. The subgrouped RTRL proved close to the RTRL network in accuracy while reducing the time required to train the network for multiple-output (classification) problems.
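    The core of RTRL is a forward-propagated sensitivity recursion. A minimal sketch for one fully recurrent layer h_t = tanh(W [h_{t-1}; x_t]) is below; all names and sizes are illustrative, not from the thesis. In the subgrouped variant, units are split into subgroups and each subgroup propagates sensitivities only through its own weights, which is what cuts the training cost for multiple-output problems.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 2                      # hidden units, inputs (hypothetical sizes)
    W = rng.normal(scale=0.5, size=(n, n + m))
    h = np.zeros(n)
    P = np.zeros((n, n, n + m))      # P[k, i, j] = d h_k / d W_ij

    def rtrl_step(x, h, P, W):
        a = np.concatenate([h, x])           # previous state + current input
        h_new = np.tanh(W @ a)
        # Explicit dependence of unit i on its own row of weights:
        # E[k, i, :] = delta_{ki} * a
        E = np.zeros_like(P)
        E[np.arange(n), np.arange(n), :] = a
        # Chain rule through the recurrent connections W[:, :n]
        P_new = (1.0 - h_new**2)[:, None, None] * (
            np.einsum('kl,lij->kij', W[:, :n], P) + E)
        return h_new, P_new

    for t in range(5):
        x = rng.normal(size=m)
        h, P = rtrl_step(x, h, P, W)

    # Gradient for a squared-error loss with target 0: dL/dW = sum_k h_k * P[k]
    grad = np.einsum('k,kij->ij', h, P)
    print(grad.shape)
    ```

    The full sensitivity tensor P is what makes plain RTRL expensive (O(n^4) work per step); subgrouping shrinks the einsum above to block-diagonal pieces.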

    Perturbation theory for the effective diffusion constant in a medium of random scatterers

    We develop perturbation theory, and physically motivated resummations of it, for the problem of a tracer particle diffusing in a random medium. The medium contains point scatterers of density ρ distributed uniformly throughout the material. The tracer is a Langevin particle subjected to the quenched random force generated by the scatterers. Via our perturbative analysis we determine when the random potential can be approximated by a Gaussian random potential. We also develop a self-similar renormalisation group approach based on thinning out the scatterers; this scheme is similar to one used with success for diffusion in Gaussian random potentials and agrees with known exact results. To assess the accuracy of this approximation scheme, its predictions are confronted with results obtained by numerical simulation.
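    The simulation side of such a study can be sketched in a few lines. Below is a minimal 2-D toy (not the paper's actual method or parameters): an overdamped Langevin tracer, dx = F(x) dt + sqrt(2 D0 dt) ξ, in a quenched field of Gaussian-blob scatterers at random positions; a crude effective diffusion constant is read off the long-time mean-squared displacement, MSD ≈ 4 D_eff t.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, rho = 20.0, 0.5                            # box size, scatterer density (toy values)
    scat = rng.uniform(0, L, size=(int(rho * L * L), 2))
    eps, sigma2, D0, dt = 1.0, 0.25, 1.0, 1e-3    # blob strength/width, bare diffusion, step

    def force(x):
        # F = -grad U for U(x) = sum_s eps * exp(-|x - x_s|^2 / (2 sigma2))
        d = x - scat
        r2 = np.sum(d * d, axis=1)
        w = eps * np.exp(-r2 / (2 * sigma2)) / sigma2
        return np.sum(d * w[:, None], axis=0)     # repulsive push away from each blob

    x = np.array([L / 2, L / 2])
    traj = [x.copy()]
    for _ in range(2000):
        x = x + force(x) * dt + np.sqrt(2 * D0 * dt) * rng.normal(size=2)
        traj.append(x.copy())
    traj = np.array(traj)

    t = np.arange(len(traj)) * dt
    msd = np.sum((traj - traj[0])**2, axis=1)
    D_eff = msd[-1] / (4 * t[-1])                 # single-trajectory, long-time estimate
    print(D_eff)
    ```

    A real comparison against the perturbative predictions would average the MSD over many tracers and disorder realizations rather than using one trajectory.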

    Implementation of a Wavefront-Sensing Algorithm

    A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary-mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution.
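    The iterative-transform idea behind such software can be illustrated with a single-plane Gerchberg-Saxton-style loop (the NASA code adds defocus diversity and per-segment handling, which this sketch omits): alternately enforce the measured amplitude at the detector and the known support at the exit pupil.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 64
    Y, X = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    mask = (X**2 + Y**2) < (N // 4)**2            # toy circular exit-pupil support

    # Synthesize a "measured" focal-plane amplitude from a known phase
    true_phase = 0.3 * np.sin(2 * np.pi * X / N) * mask
    pupil = mask * np.exp(1j * true_phase)
    meas_amp = np.abs(np.fft.fft2(pupil))

    # Iterative-transform loop: start from a flat phase guess
    est = mask * np.exp(1j * np.zeros((N, N)))
    for _ in range(200):
        field = np.fft.fft2(est)
        field = meas_amp * np.exp(1j * np.angle(field))   # detector constraint
        est = np.fft.ifft2(field)
        est = mask * np.exp(1j * np.angle(est) * mask)    # pupil-support constraint

    err = np.abs(np.abs(np.fft.fft2(est)) - meas_amp).mean()
    print(err)
    ```

    Using several defocused planes instead of one (the "adaptive diversity" of the cited brief) breaks the ambiguities a single intensity image leaves behind.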

    Zero-Shot Learning by Convex Combination of Semantic Embeddings

    Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing n-way image classifier and a semantic word embedding model that contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state-of-the-art methods on the ImageNet zero-shot learning task.
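    The convex-combination step is small enough to sketch directly. Below, the classifier probabilities and word vectors are random stand-ins for illustration: an image is embedded as the probability-weighted average of the word embeddings of its top-T predicted seen classes, and an unseen class is then assigned by nearest embedding under cosine similarity.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_seen, n_unseen, d, T = 10, 4, 16, 3         # hypothetical sizes

    word_emb = rng.normal(size=(n_seen, d))       # embeddings of the n seen labels
    unseen_emb = rng.normal(size=(n_unseen, d))   # embeddings of unseen labels
    p = rng.dirichlet(np.ones(n_seen))            # classifier output for one image

    top = np.argsort(p)[-T:]                      # top-T most probable seen classes
    f = p[top] @ word_emb[top] / p[top].sum()     # convex combination of their embeddings

    # Zero-shot prediction: nearest unseen-class embedding by cosine similarity
    sims = unseen_emb @ f / (np.linalg.norm(unseen_emb, axis=1) * np.linalg.norm(f))
    print(int(np.argmax(sims)))
    ```

    No parameters are learned in this step, which is the point the abstract stresses: any off-the-shelf classifier plus any word-embedding model yields a zero-shot system.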

    Distributed Computing Architecture for Image-Based Wavefront Sensing and 2-D FFTs

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures must perform numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication and computationally complex scientific problems.
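    The reason a distributed 2-D FFT needs an all-to-all exchange is visible in its standard decomposition: 1-D FFTs along rows (local to each node owning a block of rows), a transpose (the all-to-all), then 1-D FFTs along the new rows. A single-process sketch of that decomposition, checked against the direct 2-D transform:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))

    step1 = np.fft.fft(A, axis=1)        # 1-D FFT along each row (node-local)
    step2 = step1.T                      # transpose = the all-to-all exchange
    step3 = np.fft.fft(step2, axis=1)    # 1-D FFT along each new row (node-local)
    result = step3.T                     # transpose back to the original layout

    assert np.allclose(result, np.fft.fft2(A))
    ```

    On a cluster, each node holds a contiguous band of rows, so the transpose requires every node to send a block to every other node, which is why low-diameter interconnect topologies matter.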

    Optimal Padding for the Two-Dimensional Fast Fourier Transform

    One-dimensional Fast Fourier Transform (FFT) operations run fastest on grids whose size is a power of two. Because of this, padding grids that are not already sized to a power of two so that their size becomes the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids: for a two-dimensional grid, certain pad sizes work better than others. Therefore, the need exists for a general strategy for determining optimal pad sizes. There are three steps in the two-dimensional FFT algorithm. The first is to perform a one-dimensional transform on each row of the grid. The second is to transpose the resulting matrix. The third is to perform a one-dimensional transform on each row of the transposed grid. Steps one and three both benefit from padding rows to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between pad sizes with small prime factors (optimal for one-dimensional operations) and pad sizes with large prime factors (optimal for two-dimensional operations). The algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation resulted in faster execution times on all platforms tested, but with varying optimal grid sizes, because different computer architectures process commands differently. The test grid was 512×512. Using a 540×540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256×256 grid worked best. A Core2Duo computer preferred either a 1040×1040 (15 percent faster) or a 1008×1008 (30 percent faster) grid.
Many industries can benefit from this algorithm, including optics, image processing, signal processing, and engineering applications.
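    One common building block for such pad-size searches is finding the next "smooth" grid size, one whose prime factorization uses only small primes, which FFT libraries handle fastest. The sketch below shows that piece only; the abstract's actual algorithm also weighs cache behavior and per-platform timings, which are not reproduced here.

    ```python
    def is_smooth(n, primes=(2, 3, 5, 7)):
        """True if n factors entirely into the given small primes."""
        for p in primes:
            while n % p == 0:
                n //= p
        return n == 1

    def next_smooth(n):
        """Smallest 7-smooth integer >= n: a candidate FFT pad size."""
        while not is_smooth(n):
            n += 1
        return n

    print(next_smooth(512))   # 512 = 2^9, already smooth
    print(next_smooth(513))
    ```

    Note that the best pad in the abstract's experiments (e.g. 540 = 2^2 * 3^3 * 5 for a 512×512 grid on one platform) is smooth but not a power of two, which is exactly the trade-off the algorithm balances.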

    Variable Sampling Mapping

    The performance of an optical system (for example, a telescope) is limited by the misalignments and manufacturing imperfections of the optical elements in the system. The impact of these misalignments and imperfections can be quantified by the phase variations imparted on light traveling through the system. Phase retrieval is a methodology for determining these variations from images taken with the optical system using a light source of known shape and characteristics. Unlike interferometric methods, which require an optical reference for comparison, and unlike Shack-Hartmann wavefront sensors, which require special optical hardware at the optical system's exit pupil, phase retrieval is an in situ, image-based method for determining the phase variations of light at the system's exit pupil. Phase retrieval can be used both as an optical metrology tool (during fabrication of optical surfaces and assembly of optical systems) and as a sensor in active, closed-loop control of an optical system, to optimize performance. One class of phase-retrieval algorithms is the iterative transform algorithm (ITA). ITAs estimate the phase variations by iteratively enforcing known constraints in the exit pupil and at the detector, determined from modeled or measured data. The Variable Sampling Mapping (VSM) technique is a new method for enforcing these constraints in ITAs. VSM is an open framework for addressing a wide range of issues that have previously been considered detrimental to high-accuracy phase retrieval, including undersampled images, broadband illumination, images taken at or near best focus, chromatic aberrations, jitter or vibration of the optical system or detector, and dead or noisy detector pixels. The VSM is a model-to-data mapping procedure. 
In VSM, fully sampled electric fields at multiple wavelengths are modeled inside the phase-retrieval algorithm, and these fields are then mapped to intensities on the light detector, using the properties of the detector and optical system, for comparison with measured data. Ultimately, this model-to-data mapping procedure enables a more robust and accurate way of incorporating the exit-pupil and image-detector constraints that are fundamental to the general class of ITA phase-retrieval algorithms.
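    The basic shape of such a model-to-data mapping can be illustrated with random stand-in fields (this is not NASA's VSM code): intensities from finely sampled model fields at several wavelengths are summed incoherently, then binned down onto the coarser detector pixel grid for comparison with measured data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, bin_f = 32, 4                       # model grid; model samples per detector pixel
    wavelengths = [0.6, 0.7, 0.8]          # hypothetical broadband wavelength set (microns)

    # Fully sampled model electric fields, one per wavelength (random stand-ins)
    fields = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
              for _ in wavelengths]

    # Incoherent sum of intensities across wavelengths
    intensity = sum(np.abs(f)**2 for f in fields)

    # Bin the fully sampled model intensity onto the coarse detector grid
    det = intensity.reshape(N // bin_f, bin_f, N // bin_f, bin_f).sum(axis=(1, 3))
    print(det.shape)
    ```

    Because the fields stay fully sampled inside the model and only the final intensities are binned, undersampled detectors and broadband light can be handled in the same framework, which is the property the abstract emphasizes.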

    Performativity, fabrication and trust: exploring computer-mediated moderation

    Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, triangulated with that from other actors in the setting, in order to examine some of the meanings underpinning moderation within a performative environment. Drawing on the work of Ball (2003), Lyotard (1979) and Foucault (1977, 1979), I argue that in this particular case performativity has become entrenched in teachers’ day-to-day practices, and not only affects those practices but also teachers’ sense of self. I suggest that MAM represented performative and fabricated conditions and (re)defined what the key participant experienced as a vital constituent of her educational identities: trust. From examining the case in point, I hope to have illustrated, for those interested in teachers’ work, some of the implications of the interface between technology and performativity.