5,052 research outputs found

    The influence of non-imaging detector design on heralded ghost-imaging and ghost-diffraction examined using a triggered ICCD camera

    Ghost imaging and ghost diffraction can be realized by using the spatial correlations between signal and idler photons produced by spontaneous parametric down-conversion. If an object is placed in the signal (idler) path, the spatial correlations between the transmitted photons as measured by a single, non-imaging, “bucket” detector and a scanning detector placed in the idler (signal) path can reveal either the image or diffraction pattern of the object, whereas neither detector signal on its own can. The details of the bucket detector, such as its collection area and numerical aperture, set the number of transverse modes supported by the system. For ghost imaging these details are less important, affecting mostly the sampling time required to produce the image. For ghost diffraction, however, the bucket detector must be filtered to a single, spatially coherent mode. We examine this difference in behaviour by using either a multi-mode or single-mode fibre to define the detection aperture. Furthermore, instead of a scanning detector we use a heralded camera so that the image or diffraction pattern produced can be measured across the full field of view. The importance of single-mode detection in the observation of ghost diffraction is equivalent to the need within a classical diffraction experiment to illuminate the aperture with a spatially coherent mode.
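
    As a rough guide to the measurement being described, the relation below expresses the ghost image as a cross-correlation of the two detector signals. This is the standard coincidence-counting picture rather than notation taken from the paper, and the symbols S_b and S(x) are illustrative.

    % Schematic ghost-imaging correlation (standard picture; not the paper's own notation).
    % S_b   : counts in the non-imaging "bucket" detector behind the object
    % S(x)  : counts at transverse position x in the spatially resolving arm
    % G(x)  : the reconstructed ghost image, up to a constant background
    \[
      G(\mathbf{x}) \;\propto\; \langle S_b \, S(\mathbf{x}) \rangle \;-\; \langle S_b \rangle \, \langle S(\mathbf{x}) \rangle ,
    \]
    % i.e. neither detector signal alone carries the image; only the correlation
    % between the two does.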

    Imaging with a small number of photons

    Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. We demonstrate a single-photon imaging system based on a time-gated intensified CCD (ICCD) camera in which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying techniques of compressed sensing and associated image reconstruction, we obtain high-quality images of the object from raw data comprising fewer than one detected photon per image pixel.
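
    The abstract mentions compressed sensing with an associated reconstruction; a generic formulation of such a reconstruction is sketched below. This is background on the technique, not the paper's specific solver, and the symbols are illustrative.

    % Generic compressed-sensing reconstruction (illustrative; the paper's exact
    % formulation and solver are not given in the abstract).
    % y    : vector of sparse photon-count measurements
    % A    : sensing matrix relating the unknown image to the measurements
    % \Psi : sparsifying transform (e.g. wavelets or finite differences)
    % x    : reconstructed image, chosen to fit the data while being sparse in \Psi
    \[
      \hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 \;+\; \lambda\,\lVert \Psi x \rVert_1 .
    \]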

    A Deligne complex for Artin monoids

    In this paper we introduce and study some geometric objects associated to Artin monoids. The Deligne complex for an Artin group is a cube complex that was introduced by the second author and Davis (1995) to study the K(\pi,1) conjecture for these groups. Using a notion of Artin monoid cosets, we construct a version of the Deligne complex for Artin monoids. We show that for any Artin monoid this cube complex is contractible. Furthermore, we study the embedding of the monoid Deligne complex into the Deligne complex for the corresponding Artin group. We show that for any Artin group this is a locally isometric embedding. In the case of FC-type Artin groups this result can be strengthened to a globally isometric embedding, and it follows that the monoid Deligne complex is CAT(0) and its image in the Deligne complex is convex. We also consider the Cayley graph of an Artin group, and investigate properties of the subgraph spanned by elements of the Artin monoid. Our final results show that for a finite type Artin group, the monoid Cayley graph embeds isometrically, but not quasi-convexly, into the group Cayley graph.
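
    As background for readers meeting these objects for the first time, the standard Artin presentation is recalled below; the same generators and relations, read as a monoid presentation, define the Artin monoid. This is textbook material, not a formula from the paper.

    % Standard Artin presentation (background only).
    % For a Coxeter matrix (m_{ij}) with entries in {2, 3, ..., \infty}, the Artin group is
    \[
      A \;=\; \Big\langle\, s_1, \dots, s_n \;\Big|\;
        \underbrace{s_i s_j s_i \cdots}_{m_{ij}\ \text{letters}}
        \;=\;
        \underbrace{s_j s_i s_j \cdots}_{m_{ij}\ \text{letters}}
        \quad (i \neq j,\ m_{ij} < \infty) \,\Big\rangle ,
    \]
    % and the Artin monoid A^{+} is given by the same presentation interpreted as a
    % monoid presentation (no inverses).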

    Reinventing Scheduling for Multicore Systems

    High performance on multicore processors requires that schedulers be reinvented. Traditional schedulers focus on keeping execution units busy by assigning each core a thread to run. Schedulers ought to focus, however, on high utilization of on-chip memory, rather than of execution cores, to reduce the impact of expensive DRAM and remote cache accesses. A challenge in achieving good use of on-chip memory is that the memory is split up among the cores in the form of many small caches. This paper argues for a form of scheduling that assigns each object and its operations to a specific core, moving a thread among the cores as it uses different objects.
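
    To make the proposed direction concrete, here is a minimal sketch, in C for Linux, of scheduling that follows data rather than cores: each object records a home core, and a thread migrates to that core before operating on the object. The names (struct counter, run_on_home_core, counter_add) are hypothetical and not taken from the paper.

    /* Illustrative sketch (not the paper's code): pin each shared object to a
     * "home" core and migrate the calling thread there before operating on the
     * object, so the object's cache lines stay hot in one on-chip cache. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    struct counter {
        int home_core;   /* core whose on-chip cache should hold this object */
        long value;
    };

    /* Move the calling thread onto the object's home core. */
    static void run_on_home_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");
    }

    static void counter_add(struct counter *c, long n)
    {
        run_on_home_core(c->home_core);  /* follow the data, not the core */
        c->value += n;                   /* now likely a local-cache hit */
    }

    int main(void)
    {
        struct counter a = { .home_core = 0, .value = 0 };
        struct counter b = { .home_core = 1, .value = 0 };
        counter_add(&a, 5);              /* thread migrates to core 0 */
        counter_add(&b, 7);              /* thread migrates to core 1 */
        printf("a=%ld b=%ld\n", a.value, b.value);
        return 0;
    }

    The point of the sketch is that an operation always runs where the object's cache lines live; a real scheduler would batch operations per core rather than migrating on every call.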

    Preliminary evaluation of water quality in tidal creeks of Virginia's Eastern Shore in relation to vegetable cultivation

    In response to concerns raised about the impacts of vegetable cultivation using plastic ground covers on water quality, we have initiated a broad-scale, systematic study of water quality in seaside tidal creeks of Virginia's Eastern Shore. Our objective was to determine if acute toxicity associated with heavy metals or pesticides was more prevalent in tidal creeks with drainage areas which include this agricultural practice than in those which do not. Though such correlations do not confirm cause and effect, they may serve as the basis for future, more targeted investigations and for some immediate changes in land management practices which, regardless of the specific cause, are likely to produce some remediation. Eleven study sites, located in six different watersheds, were selected to evaluate acute toxicity from heavy metals and organic pesticides. Land use patterns and acreage within each watershed were determined from aerial photographs. The amount of vegetable plasticulture in the watersheds of the study sites ranged from 0-13% of total acreage. An assay for heavy metals, based upon enzyme inhibition in a bacterial strain, was used to determine if up to seven metals (including copper) were present at acutely toxic levels. Both water samples and aqueous extracts of sediment samples were tested. A continuous series of 96 hr in situ bioassays using the grass shrimp, Palaemonetes pugio, was conducted from Aug. 1, 1996 to Sept. 22, 1996 at each station to assay for toxicity from organic pesticides. Grass shrimp are known to be quite sensitive to insecticides, and the in situ bioassay approach provides a continuous means of monitoring for toxic events.

    Resolution limits of quantum ghost imaging

    Quantum ghost imaging uses photon pairs produced from parametric downconversion to enable an alternative method of image acquisition. Information from either one of the photons does not yield an image, but an image can be obtained by harnessing the correlations between them. Here we present an examination of the resolution limits of such ghost imaging systems. In both conventional imaging and quantum ghost imaging the resolution of the image is limited by the point-spread function of the optics associated with the spatially resolving detector. However, whereas in conventional imaging systems the resolution is limited only by this point-spread function, in ghost imaging we show that the resolution can be further degraded by reducing the strength of the spatial correlations inherent in the downconversion process.
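
    One way to read the claim is through the usual convolution picture of ghost imaging, sketched below; this is a common approximation rather than the paper's exact expression, and the symbols are illustrative.

    % Schematic blurring model for a ghost image (common approximation only).
    % |T(x)|^2 : transmission of the object
    % h_det(x) : point-spread function of the optics at the spatially resolving detector
    % C(x)     : transverse correlation function of the down-converted photon pair
    \[
      G(\mathbf{x}) \;\approx\; \big( |T|^{2} \ast h_{\mathrm{det}} \ast C \big)(\mathbf{x}) ,
    \]
    % so broadening C (weaker spatial correlations) blurs the image further even
    % when h_det is unchanged, which is the degradation described in the abstract.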

    A Software Approach to Unifying Multicore Caches

    Multicore chips will have large amounts of fast on-chip cache memory, along with relatively slow DRAM interfaces. The on-chip cache memory, however, will be fragmented and spread over the chip; this distributed arrangement is hard for certain kinds of applications to exploit efficiently, and can lead to needless slow DRAM accesses. First, data accessed from many cores may be duplicated in many caches, reducing the amount of distinct data cached. Second, data in a cache distant from the accessing core may be slow to fetch via the cache coherence protocol. Third, software on each core can only allocate space in the small fraction of total cache memory that is local to that core. A new approach called software cache unification (SCU) addresses these challenges for applications that would be better served by a large shared cache. SCU chooses the on-chip cache in which to cache each item of data. As an application thread reads data items, SCU moves the thread to the core whose on-chip cache contains each item. This allows the thread to read the data quickly if it is already on-chip; if it is not, moving the thread causes the data to be loaded into the chosen on-chip cache. A new file cache for Linux, called MFC, uses SCU to improve performance of file-intensive applications, such as Unix file utilities. An evaluation on a 16-core AMD Opteron machine shows that MFC improves the throughput of file utilities by a factor of 1.6. Experiments with a platform that emulates future machines with less DRAM throughput per core show that MFC will provide benefit to a growing range of applications. This material is based upon work supported by the National Science Foundation under grant number 0915164.
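
    A minimal sketch of the idea, in C for Linux and not MFC's actual implementation, is shown below: a home core is chosen for each file block by hashing its block number, and the reading thread migrates to that core before touching the block, so repeated reads of the block hit one core's cache rather than many. All names (block_home_core, read_block, NCORES) are hypothetical.

    /* Illustrative SCU-style sketch: route each block's accesses through one
     * chosen core so the block is cached once, in that core's cache. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    #define NCORES     16
    #define BLOCK_SIZE 4096

    static char blocks[64][BLOCK_SIZE];     /* stand-in for file data */

    static int block_home_core(long blockno)
    {
        return (int)(blockno % NCORES);     /* spread blocks over the cores' caches */
    }

    static void move_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        sched_setaffinity(0, sizeof(set), &set);
    }

    /* Read a block "through" the core chosen to cache it. */
    static void read_block(long blockno, char *out)
    {
        move_to_core(block_home_core(blockno));    /* thread follows the chosen cache */
        memcpy(out, blocks[blockno], BLOCK_SIZE);  /* load/warm that core's cache */
    }

    int main(void)
    {
        char buf[BLOCK_SIZE];
        read_block(42, buf);
        printf("block 42 handled on core %d\n", block_home_core(42));
        return 0;
    }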

    OpLog: a library for scaling update-heavy data structures

    Existing techniques (e.g., RCU) can achieve good multi-core scaling for read-mostly data, but for update-heavy data structures only special-purpose techniques exist. This paper presents OpLog, a general-purpose library supporting good scalability for update-heavy data structures. OpLog achieves scalability by logging each update in a low-contention per-core log; it combines logs only when required by a read to the data structure. OpLog achieves generality by logging operations without having to understand them, to ease application to existing data structures. OpLog can further increase performance if the programmer indicates which operations can be combined in the logs. An evaluation shows how to apply OpLog to three update-heavy Linux kernel data structures. Measurements on a 48-core AMD server show that the result significantly improves the performance of the Apache web server and the Exim mail server under certain workloads.
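
    The core idea lends itself to a short sketch. The C code below illustrates the approach described in the abstract, not OpLog's actual API: each core appends updates to its own locked log, and a read first drains all per-core logs into the real data structure (here just a counter). All names are hypothetical.

    /* Illustrative OpLog-style sketch: absorb updates in per-core logs and
     * apply them only when a reader needs the value. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NCORES 48
    #define LOGCAP 1024

    struct percore_log {
        pthread_mutex_t lock;
        int n;
        long ops[LOGCAP];        /* pending increments, not yet applied */
    } logs[NCORES];

    static long counter;         /* the shared data structure */

    /* Update path: touch only the local core's log (low contention). */
    void counter_add(long n)
    {
        struct percore_log *l = &logs[sched_getcpu() % NCORES];
        pthread_mutex_lock(&l->lock);
        if (l->n < LOGCAP)       /* a real implementation would flush a full log */
            l->ops[l->n++] = n;
        pthread_mutex_unlock(&l->lock);
    }

    /* Read path: merge every per-core log, then read the real value. */
    long counter_read(void)
    {
        for (int c = 0; c < NCORES; c++) {
            pthread_mutex_lock(&logs[c].lock);
            for (int i = 0; i < logs[c].n; i++)
                counter += logs[c].ops[i];
            logs[c].n = 0;
            pthread_mutex_unlock(&logs[c].lock);
        }
        return counter;
    }

    int main(void)
    {
        for (int c = 0; c < NCORES; c++)
            pthread_mutex_init(&logs[c].lock, NULL);
        counter_add(3);
        counter_add(4);
        printf("counter = %ld\n", counter_read());
        return 0;
    }

    Updates from different cores never touch shared state, so they do not contend; the cost is shifted to the (rarer) read, which is the trade the abstract describes.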