    Optimizing Abstract Abstract Machines

    The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer, derived under the AAM approach, to an efficient and correct implementation. The end result of the process is an improvement of two to three orders of magnitude over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results.

    Comment: Proceedings of the International Conference on Functional Programming 2013 (ICFP 2013). Boston, Massachusetts. September, 2013.
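The core move of AAM can be illustrated with a minimal sketch (an assumption for illustration, not the paper's artifact): the machine's address space is made finite, so the store maps each address to a *set* of abstract values and rebinding becomes a join. The transition system over such stores is finite, and therefore a computable over-approximation of the concrete evaluator.

```python
from collections import defaultdict

class AbstractStore:
    """A store whose addresses map to sets of abstract values."""

    def __init__(self):
        self.table = defaultdict(set)

    def join(self, addr, value):
        # Weak update: accumulate rather than overwrite, preserving
        # soundness when many concrete bindings share one abstract address.
        self.table[addr].add(value)

    def lookup(self, addr):
        return frozenset(self.table[addr])

store = AbstractStore()
# With finite addresses (e.g. one per program variable), rebinding joins:
store.join("x", "Int")
store.join("x", "Bool")
assert store.lookup("x") == frozenset({"Int", "Bool"})
```

The paper's optimization process starts from an analyzer with exactly this naive set-valued store and refines its representation; the sketch shows only the starting point.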

    Adaptive, spatially-varying aberration correction for real-time holographic projectors.

    A method of generating an aberration- and distortion-free wide-angle holographically projected image in real time is presented. The target projector is first calibrated using an automated adaptive-optical mechanism. The calibration parameters are then fed into the hologram generation program, which applies a novel piece-wise aberration correction algorithm. The method is found to offer hologram generation times up to three orders of magnitude faster than the standard method. A projection of an aberration- and distortion-free image with a field of view of 90×45 degrees is demonstrated. The implementation on a mid-range GPU achieves high resolution at a frame rate of up to 12 fps. The presented methods are automated and can be performed on any holographic projector.

    Funding: Engineering and Physical Sciences Research Council. This is the final version of the article; it first appeared from the Optical Society of America via https://doi.org/10.1364/OE.24.01574
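The piece-wise idea can be sketched as follows (a hypothetical illustration, not the paper's algorithm): partition the hologram plane into tiles and multiply each tile by a precomputed phase correction from calibration, so a spatially varying aberration is compensated per region rather than per pixel each frame.

```python
import numpy as np

def correct_piecewise(hologram, corrections, tile=64):
    """Apply a constant phase correction (radians) per tile of the hologram.

    hologram    -- 2-D complex field
    corrections -- nested list of per-tile phase offsets (hypothetical
                   calibration output; the real method's parameters differ)
    """
    out = hologram.copy()
    h, w = hologram.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            phase = corrections[i // tile][j // tile]
            out[i:i + tile, j:j + tile] *= np.exp(1j * phase)
    return out

field = np.ones((128, 128), dtype=complex)
corr = [[0.0, np.pi / 2], [np.pi, 3 * np.pi / 2]]
corrected = correct_piecewise(field, corr)
```

Because each tile's correction is a single precomputed multiply, the per-frame cost is far lower than evaluating a full aberration model at every pixel, which is consistent with the reported speed-up.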

    Scale-invariant temporal history (SITH): optimal slicing of the past in an uncertain world

    In both the human brain and any general artificial intelligence (AI), a representation of the past is necessary to predict the future. However, perfect storage of all experiences is not possible. One possibility, utilized in many applications, is to retain information about the past in a buffer. A limitation of this approach is that although events in the buffer are represented with perfect accuracy, the resources necessary to represent information at a particular time scale grow rapidly. Here we present a neurally plausible, compressed, scale-free memory representation we call Scale-Invariant Temporal History (SITH). This representation covers an exponentially large period of time in the past at the cost of sacrificing temporal accuracy for events further in the past. The form of this decay is scale-invariant and can be shown to be optimal in that it is able to respond to worlds with a wide range of time scales. We demonstrate the utility of this representation in learning to play a simple video game. In this environment, SITH exhibits better learning performance than a fixed-size buffer history representation. Whereas the buffer performs well as long as the temporal dependencies can be represented within the buffer, SITH performs well over a much larger range of time scales for the same amount of resources. Finally, we discuss how the application of SITH, along with other human-inspired models of cognition, could improve reinforcement and machine learning algorithms in general.

    First author draft.
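The trade-off the abstract describes can be illustrated with a simple sketch (an assumed illustration of the scale-free idea, not the published SITH model): a fixed bank of leaky integrators with geometrically spaced time constants covers an exponentially long past with constant resources, while temporal resolution degrades for older events.

```python
import numpy as np

class ScaleFreeHistory:
    """A bank of exponential traces with geometrically spaced time constants."""

    def __init__(self, n_scales=8, tau_min=1.0, ratio=2.0):
        # Time constants tau_min * ratio**k: n_scales units span a past
        # of length ~tau_min * ratio**(n_scales - 1).
        self.taus = tau_min * ratio ** np.arange(n_scales)
        self.state = np.zeros(n_scales)

    def step(self, x):
        # Each unit relaxes toward the current input at its own time scale.
        alpha = 1.0 / self.taus
        self.state += alpha * (x - self.state)
        return self.state.copy()

mem = ScaleFreeHistory()
for t in range(100):
    mem.step(1.0 if t == 0 else 0.0)  # a single impulse at t = 0
# Fast units have forgotten the impulse; slow units still carry a faint,
# temporally blurred trace of it.
```

A fixed-size buffer of the same width (8 slots) would have dropped the impulse after 8 steps; the scale-free bank retains it, at the cost of no longer knowing exactly when it occurred.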