
    Continuously Crossing u=z in the H3+ Boundary CFT

    For AdS boundary conditions, we give a solution of the H3+ two-point function involving the degenerate field with SL(2) label b^{-2}/2, defined on the full (u,z) unit square. It consists of two patches, one for z<u and one for u<z. Along the u=z "singularity", the solutions from both patches are shown to have finite limits and are merged continuously, as suggested by the work of Hosomichi and Ribault. From this two-point function, we derive b^{-2}/2-shift equations for AdS_2 D-branes. We show that discrete as well as continuous AdS_2 branes are consistent with our novel shift equations without any new restrictions.
    Comment: version to appear in JHEP - 12 pages now; sign error with impact on some parts of the interpretation fixed; material added to become more self-contained; role of bulk-boundary OPE in section 4 discussed more carefully; 3 references added
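    For orientation, the generic mechanism behind such shift equations can be sketched as follows (a hedged schematic, not the paper's actual equations): because a degenerate field has only a finite fusion rule, the bulk-boundary expansion of the two-point function collapses to a linear difference relation between brane one-point amplitudes A(j) at shifted SL(2) labels, schematically of the form

        c_+(j)\, A\!\left(j + \tfrac{1}{2}\, b^{-2}\right)
        \;+\; c_-(j)\, A\!\left(j - \tfrac{1}{2}\, b^{-2}\right) \;=\; 0,

    with coefficients c_\pm(j) fixed by the bulk-boundary OPE of the degenerate field; a brane is consistent if its one-point amplitude solves this relation.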

    Learning in stochastic neural networks for constraint satisfaction problems

    The researchers describe a newly developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to satisfactory solutions when they exist. The network runs efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. The researchers have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network's performance.
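    To make the flavour of such a stochastic constraint network concrete, here is a minimal self-contained sketch in Python (a generic min-conflicts-style repair loop, not the authors' GDS network; the function name and parameters are invented for illustration). The hard one-queen-per-row encoding plays a role loosely analogous to a guard: it enforces one constraint class by construction rather than by energy minimisation.

        import random

        def solve_n_queens(n, max_steps=100_000, seed=0):
            """Stochastic repair search for N-queens; state[r] = column of
            the queen in row r, so the row constraint always holds."""
            rng = random.Random(seed)
            state = [rng.randrange(n) for _ in range(n)]

            def conflicts(r, c):
                # Count attacks on square (r, c) from queens in other rows.
                return sum(
                    1
                    for r2 in range(n)
                    if r2 != r and (state[r2] == c or abs(state[r2] - c) == abs(r2 - r))
                )

            for _ in range(max_steps):
                conflicted = [r for r in range(n) if conflicts(r, state[r]) > 0]
                if not conflicted:
                    return state              # all constraints satisfied
                r = rng.choice(conflicted)    # asynchronous, stochastic update
                costs = [conflicts(r, c) for c in range(n)]
                best = min(costs)
                state[r] = rng.choice([c for c in range(n) if costs[c] == best])
            return None                       # no solution within the budget

    For example, solve_n_queens(64) typically returns a valid placement well within the step budget; asynchronous, randomised updates of conflicted units are what make this style of search simple yet effective on large instances.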

    Restoration of HST images with missing data

    Missing data are a fairly common problem when restoring Hubble Space Telescope observations of extended sources. On Wide Field and Planetary Camera images, cosmic ray hits and CCD hot spots are the prevalent causes of data losses, whereas on Faint Object Camera images data are lost due to reseaux marks, blemishes, areas of saturation and the omnipresent frame edges. This contribution discusses a technique for 'filling in' missing data by statistical inference using information from the surrounding pixels. The major gain consists in minimizing adverse spill-over effects on the restoration in areas neighboring those where data are missing. When the mask delineating the support of 'missing data' is made dynamic, cosmic ray hits, etc. can be detected on the fly during restoration.
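    To illustrate the "fill in by inference from the surrounding pixels" idea, here is a hedged Python sketch of a Richardson-Lucy-style iteration that simply excludes masked (missing) pixels from the data-fidelity ratio and renormalises accordingly; it shows the mechanism only and is not the authors' exact scheme (function name and parameters are invented):

        import numpy as np
        from scipy.signal import fftconvolve

        def masked_richardson_lucy(image, psf, mask, n_iter=50):
            """mask is 1 where data are valid, 0 where data are missing."""
            image = image.astype(float)
            psf_mirror = psf[::-1, ::-1]
            # Start from a flat image at the mean level of the valid pixels.
            estimate = np.full(image.shape, image[mask > 0].mean())
            # Per-pixel normalisation: how much valid data "sees" each pixel.
            norm = np.maximum(fftconvolve(mask, psf_mirror, mode="same"), 1e-12)
            for _ in range(n_iter):
                blurred = np.maximum(fftconvolve(estimate, psf, mode="same"), 1e-12)
                # Masked pixels contribute nothing to the correction factor.
                ratio = np.where(mask > 0, image, 0.0) / blurred
                estimate *= fftconvolve(ratio, psf_mirror, mode="same") / norm
            return estimate

    Because masked pixels carry zero weight, the correction in their neighbourhood is driven entirely by the surrounding valid data, which is precisely what suppresses spill-over artifacts around the gaps.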

    A low-cost vector processor boosting compute-intensive image processing operations

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
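    A minimal sketch of the standard Richardson-Lucy iteration, with both convolutions carried out via FFTs, shows why the inner loop is dominated by precisely the kind of work a vector processor accelerates (this is generic NumPy, not the i860 implementation; names and parameters are illustrative):

        import numpy as np

        def richardson_lucy_fft(data, psf, n_iter=50):
            """psf: same shape as data, centred on the array centre."""
            data = data.astype(float)
            # Precompute the optical transfer function once.
            otf = np.fft.rfft2(np.fft.ifftshift(psf))

            def conv(x, transfer):
                return np.fft.irfft2(np.fft.rfft2(x) * transfer, s=data.shape)

            estimate = np.full(data.shape, data.mean())
            for _ in range(n_iter):
                blurred = np.maximum(conv(estimate, otf), 1e-12)
                # Correlation with the mirrored PSF = multiply by conj(OTF).
                estimate *= conv(data / blurred, np.conj(otf))
            return estimate

    Each iteration costs a handful of 2-D FFTs, so the standalone FFT benchmarks mentioned above are a good proxy for restoration throughput on such a board.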

    Scientific computing in the 1990s: An astronomical perspective

    The compute performance, storage capability, degree of networking, and usability of modern computer hardware have progressed enormously in the past decade. These hardware advances are not paralleled by an equivalent increase in software productivity. Among astronomers the need is gradually perceived to discuss questions such as whether we are prepared to meet the pending challenge of vector and massively parallel computers. Therefore, a moderated, time-limited and access-restricted wide-area network discussion forum is proposed for having a first, broad-minded go at the question of whether our current software efforts are heading in the right direction. The main topics, goals, means, and form of the proposed discussion process are presented.

    Operation Video: Eine Technik des Nahsehens und ihr spezifisches Subjekt: die Videokünstlerin der 1970er Jahre

    The volume examines the video art of the early 1970s with regard to its interventions in the body and media discourses, and in the discourses of subjectivity, of its time. Following Walter Benjamin, the study develops an operative understanding of the image and works "the alert sense for the signature of the times" out of the practices and discourses of video art. Analyses of video works by Eleanor Antin, Lynda Benglis, Lili Dujourie, Sanja Iveković, Martha Rosler, Lisa Steele, Hannah Wilke and other artists demonstrate a representation-critical use of the medium that makes the work on, with and in images legible as an argumentation that is as aesthetic as it is political.

    Optical measurements of phase steps in segmented mirrors - fundamental precision limits

    Phase steps are an important type of wavefront aberration generated by large telescopes with segmented mirrors. In a closed-loop correction cycle these phase steps have to be measured with the highest possible precision using natural reference stars, that is, with a small number of photons. In this paper the classical Fisher information of statistics is used to calculate the Cramér-Rao bound, which determines the limit to the precision with which the height of the steps can be estimated in an unbiased fashion with a given number of photons and a given measuring device. Four types of measurement devices are discussed: a Shack-Hartmann sensor with one small cylindrical lenslet covering a sub-aperture centred over a border, a modified Mach-Zehnder interferometer, a Foucault test, and a curvature sensor. The Cramér-Rao bound is calculated for all sensors under ideal conditions, that is, narrowband measurements without additional noise or disturbances apart from the photon shot noise. This limit is compared with the ultimate quantum statistical limit for the estimate of such a step, which is independent of the measuring device. For the Shack-Hartmann sensor, the effects on the Cramér-Rao bound of broadband measurements, finite sampling, and disturbances such as atmospheric seeing and detector readout noise are also investigated. The methods presented here can be used to compare the precision limits of various devices for measuring phase steps and to optimise the parameters of the devices. Under ideal conditions the Shack-Hartmann and the Foucault devices nearly attain the ultimate quantum statistical limits, whereas the Mach-Zehnder and the curvature devices each require approximately twenty times as many photons to reach the same precision.
    Comment: 23 pages, 19 figures, to be submitted to Journal of Modern Optics
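    For reference, the Cramér-Rao bound invoked here takes the standard form: for N independently detected photons, each landing in detector pixel k with probability p_k(a) that depends on the step height a, any unbiased estimator \hat{a} obeys

        \operatorname{Var}(\hat{a}) \;\ge\; \frac{1}{N\, I(a)},
        \qquad
        I(a) \;=\; \sum_k \frac{1}{p_k(a)}
        \left( \frac{\partial p_k(a)}{\partial a} \right)^{\!2},

    so comparing sensors amounts to comparing the per-photon Fisher information I(a) that their measured intensity patterns carry about the step height.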

    Boundary conformal field theory analysis of the H3+ model

    [no abstract]