
    Are tiled display walls needed for astronomy?

    Clustering commodity displays into a Tiled Display Wall (TDW) provides a cost-effective way to create an extremely high resolution display, capable of approaching the image sizes now generated by modern astronomical instruments. Astronomers face the challenge of inspecting single large images, many similar images simultaneously, and heterogeneous but related content. Many research institutions have constructed TDWs on the premise that they will improve the scientific outcomes of astronomical imagery. We test this premise by presenting sample images to astronomers and non-astronomers using a standard desktop display (SDD) and a TDW. These samples include standard English words, wide-field galaxy surveys, and nebula mosaics from the Hubble telescope. The experiments show that TDWs provide a better environment than SDDs for searching for small targets in large images. They also show that astronomers tend to be better at searching images for targets than non-astronomers, that both groups generally perform better when employing physical rather than virtual navigation, and that two non-astronomers working together on a TDW rival a single astronomer. However, there is a large spread in aptitude among the participants, and the nature of the content also plays a significant role in success.
    Comment: 19 pages, 15 figures; accepted for publication in PASA (Publications of the Astronomical Society of Australia).

    Cache Equalizer: A Cache Pressure Aware Block Placement Scheme for Large-Scale Chip Multiprocessors

    This paper describes Cache Equalizer (CE), a novel distributed cache management scheme for large-scale chip multiprocessors (CMPs). Our work is motivated by the large asymmetry in cache set usage. CE decouples the physical locations of cache blocks from their addresses in order to reduce misses caused by destructive interference. Temporal pressure at the on-chip last-level cache is continuously collected at a group granularity (a group comprises several cache sets) and periodically recorded at the memory controller to guide the placement process. An incoming block is then placed in the cache group that exhibits the minimum pressure. CE provides quality of service (QoS) by robustly offering better performance than the baseline shared NUCA cache. Simulation results using a full-system simulator demonstrate that CE outperforms shared NUCA caches by an average of 15.5%, and by as much as 28.5%, on the benchmark programs we examined. Furthermore, our evaluation shows that CE also outperforms related CMP cache designs.
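
    As a concrete illustration of the placement policy described above, the following Python sketch tracks per-group pressure counters and places each incoming block in the least-pressured group. It is a minimal sketch under assumed parameters (group count, decay factor, software data structures); it does not reproduce the paper's actual hardware mechanism.

```python
# Minimal sketch of pressure-aware block placement in the spirit of
# Cache Equalizer (CE). All names and parameters are illustrative
# assumptions, not taken from the paper.
from collections import defaultdict

NUM_GROUPS = 16        # cache sets are clustered into groups (assumed count)
DECAY = 0.5            # periodic aging so pressure reflects recent activity

pressure = defaultdict(int)   # group id -> observed temporal pressure
placement = {}                # block address -> group id (decoupled mapping)

def record_access(group_id):
    """Bookkeeping performed at the last-level cache on each access."""
    pressure[group_id] += 1

def place_block(addr):
    """Place an incoming block in the group with minimum pressure."""
    target = min(range(NUM_GROUPS), key=lambda g: pressure[g])
    placement[addr] = target   # physical location decoupled from address
    return target

def epoch_end():
    """Periodically age the counters, as the memory controller might."""
    for g in pressure:
        pressure[g] = int(pressure[g] * DECAY)
```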

    Scalable Interactive Volume Rendering Using Off-the-shelf Components

    This paper describes an application of a second-generation implementation of the Sepia architecture (Sepia-2) to interactive volumetric visualization of large rectilinear scalar fields. By employing pipelined associative blending operators in a sort-last configuration, a demonstration system with 8 rendering computers sustains 24 to 28 frames per second while interactively rendering large data volumes (1024x256x256 voxels, and 512x512x512 voxels). We believe interactive performance at these frame rates and data sizes is unprecedented. We also believe these results can be extended to other types of structured and unstructured grids and a variety of GL rendering techniques, including surface rendering and shadow mapping. We show how to extend our single-stage crossbar demonstration system to multi-stage networks in order to support much larger data sizes and higher image resolutions. This requires solving a dynamic mapping problem for a class of blending operators that includes the Porter-Duff compositing operators.
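
    The pipelined compositing described above depends on the blending operator being associative. The Python/NumPy sketch below shows the Porter-Duff 'over' operator on premultiplied RGBA images, whose associativity is what allows a sort-last reduction to be evaluated pairwise in a pipeline or a tree; the function names and array layout are illustrative assumptions, not the Sepia-2 implementation.

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' on premultiplied RGBA arrays (H x W x 4)."""
    alpha_f = front[..., 3:4]
    # The same formula applies to color and alpha channels alike.
    return front + (1.0 - alpha_f) * back

def composite_sort_last(partials):
    """Blend per-node partial images in depth order.

    Because 'over' is associative, this left-to-right reduction could
    equally be parenthesized pairwise in a pipeline or a tree, which
    is what a Sepia-style compositing network exploits.
    """
    result = partials[0]
    for img in partials[1:]:
        result = over(result, img)
    return result
```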

    Suppression of backscattered diffraction from sub-wavelength ‘moth-eye’ arrays

    The eyes and wings of some species of moth are covered with arrays of nanoscale features that dramatically reduce the reflection of light. This approach has been adapted multiple times for antireflection and antiglare technologies through the fabrication of artificial moth-eye surfaces. In this work, the iridescence caused by the diffraction of light from such artificial regular moth-eye arrays at high angles of incidence is suppressed with a new tiled domain design, inspired by the arrangement of features on natural moth-eye surfaces. This biomimetic pillar architecture has high optical rotational symmetry and can achieve large reductions in diffraction order power. For example, a tiled design fabricated in silicon and consisting of domains with 9 different orientations of the traditional hexagonal array exhibited a ~96% reduction in the intensity of the −1 diffraction order. It is suggested that natural moth-eye surfaces have evolved a tiled domain structure because it confers efficient antireflection whilst avoiding problems with high-angle diffraction. This combination of antireflection and stealth properties increases the chances of survival by reducing the risk of the insect being spotted by a predator. Furthermore, the tiled domain design could lead to more effective artificial moth-eye arrays for antiglare and stealth applications.
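
    For orientation, the standard grating equation (textbook optics, not a formula from the paper) explains why a sub-wavelength pitch suppresses back-diffracted orders and why the −1 order is the first to reappear at high angles of incidence:

```latex
% Grating equation: a periodic array of pitch d illuminated at
% incidence angle \theta_i diffracts into orders m satisfying
\[
  \sin\theta_m = \sin\theta_i + \frac{m\lambda}{d}.
\]
% An order propagates only if |\sin\theta_i + m\lambda/d| \le 1, so
\[
  d < \frac{\lambda}{1 + \sin\theta_i}
  \quad\Longrightarrow\quad
  \text{only the } m = 0 \text{ order propagates.}
\]
% As the pitch or the incidence angle grows, the m = -1 order is the
% first to satisfy the propagation condition and thus the first to
% reappear, which is the order the tiled design targets.
```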

    Seamless tiled display system

    A modular and scalable seamless tiled display apparatus includes multiple display devices, a screen, and multiple lens assemblies. Each display device is subdivided into multiple sections, and each section is configured to display a sectional image. One lens assembly is optically coupled to each section of each display device to project the sectional image displayed on that section onto the screen. The multiple lens assemblies are configured to merge the projected sectional images into a single tiled image. The projected sectional images may be merged on the screen by magnifying and shifting the images in an appropriate manner. This magnification and shifting eliminates any visual effect on the tiled display resulting from the dead-band regions between adjacent sections on each display device and from the gaps between display devices.
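
    A toy calculation (purely illustrative; the patent's actual optical parameters are not given here) shows how the required magnification and per-tile shift follow from the section and dead-band widths:

```python
def seamless_params(section_px, deadband_px):
    """Illustrative geometry only, not the patent's design.

    Magnifying each projected section so it also covers the adjacent
    dead band makes neighboring tiles abut on the screen; each tile
    must then shift by the extra width gained to stay aligned.
    """
    magnification = (section_px + deadband_px) / section_px
    shift_px = deadband_px   # additional offset accumulated per tile
    return magnification, shift_px

# Example: 960-px sections separated by 24-px dead bands need
# 2.5% magnification and a 24-px shift per tile.
print(seamless_params(960, 24))   # (1.025, 24)
```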

    Towards Precision LSST Weak-Lensing Measurement - I: Impacts of Atmospheric Turbulence and Optical Aberration

    The weak-lensing science of the LSST project drives the need to carefully model and separate instrumental artifacts from the intrinsic lensing signal. The dominant source of systematics for all ground-based telescopes is the spatial correlation of the PSF, modulated by both atmospheric turbulence and optical aberrations. In this paper, we present a full-FOV simulation of LSST images, modeling both the atmosphere and the telescope optics with the most current data for the telescope specifications and the environment. To simulate the effects of atmospheric turbulence, we generated six-layer phase screens with parameters estimated from on-site measurements. For the optics, we combined the ray-tracing tool ZEMAX with our simulated focal plane data to introduce realistic aberrations and focal plane height fluctuations. Although the expected flatness deviation for LSST is small compared with that of other existing cameras, the fast f-ratio of the LSST optics makes this focal plane flatness variation, and the resulting PSF discontinuities across CCD boundaries, a significant challenge in removing the systematics. We resolve this complication by performing PCA CCD by CCD and interpolating the basis functions using conventional polynomials. We demonstrate that this PSF correction scheme reduces the residual PSF ellipticity correlation below 10^-7 over the cosmologically interesting scales. From a null test using HST/UDF galaxy images without input shear, we verify that the amplitude of the galaxy ellipticity correlation function after PSF correction is consistent with the shot noise set by the finite number of objects. We therefore conclude that the current optical design and the specification for the accuracy of the focal plane assembly are sufficient to enable control of the PSF systematics required for weak-lensing science with the LSST.
    Comment: Accepted to PASP. A high-resolution version is available at http://dls.physics.ucdavis.edu/~mkjee/LSST_weak_lensing_simulation.pd
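
    The Python/NumPy sketch below illustrates the general shape of such a correction scheme: PCA of star images performed CCD by CCD, with the PCA coefficients interpolated across each CCD by low-order polynomials. Array shapes, the number of basis functions, and the polynomial degree are assumptions, not the values used in the paper.

```python
# Illustrative per-CCD PCA PSF model with polynomial interpolation of
# the coefficients. All sizes and names are assumptions.
import numpy as np

def poly_design(positions, deg):
    """Conventional 2-D polynomial design matrix: 1, x, y, x^2, xy, y^2, ..."""
    x, y = positions[:, 0], positions[:, 1]
    cols = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    return np.stack(cols, axis=1)

def fit_ccd_psf_model(stamps, positions, n_basis=5, deg=2):
    """stamps: (N, p, p) star cutouts on one CCD; positions: (N, 2)."""
    X = stamps.reshape(len(stamps), -1)
    mean = X.mean(axis=0)
    # Principal components of the stellar PSF images on this CCD.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_basis]                  # (n_basis, p*p)
    coeffs = (X - mean) @ basis.T         # (N, n_basis)
    # Fit each coefficient as a low-order polynomial of position.
    A = poly_design(positions, deg)
    poly, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
    return mean, basis, poly

def psf_at(pos, mean, basis, poly, deg=2):
    """Reconstruct the PSF at an arbitrary position on the CCD."""
    c = poly_design(pos[None, :], deg) @ poly   # interpolated coefficients
    p = int(np.sqrt(mean.size))
    return (mean + c @ basis).reshape(p, p)
```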

    BiDi screen: a thin, depth-sensing LCD for 3D interaction using light fields

    We transform an LCD into a display that supports both 2D multi-touch and unencumbered 3D gestures. Our BiDirectional (BiDi) screen, capable of both image capture and display, is inspired by emerging LCDs that use embedded optical sensors to detect multiple points of contact. Our key contribution is to exploit the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality. We switch between a display mode showing traditional graphics and a capture mode in which the backlight is disabled and the LCD displays a pinhole array or an equivalent tiled-broadband code. A large-format image sensor is placed slightly behind the liquid crystal layer. Together, the image sensor and LCD form a mask-based light field camera, capturing an array of images equivalent to that produced by a camera array spanning the display surface. The recovered multi-view orthographic imagery is used to passively estimate the depth of scene points. Two motivating applications are described: hybrid touch-plus-gesture interaction and a light-gun mode for interacting with external light-emitting widgets. We show a working prototype that simulates the image sensor with a camera and diffuser, allowing interaction up to 50 cm in front of a modified 20.1-inch LCD.
    Funding: National Science Foundation (U.S.) (Grant CCF-0729126); Alfred P. Sloan Foundation.
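
    The sketch below (assumptions throughout; this is not the BiDi implementation) illustrates one way depth can be estimated from multi-view orthographic imagery: a plane sweep that shears each view by the parallax expected at a candidate depth and keeps, per pixel, the depth with the best photo-consistency across views.

```python
# Plane-sweep depth estimation from orthographic sub-images, as an
# illustrative stand-in for the paper's depth recovery step. The
# parallax model and all parameters are assumptions.
import numpy as np

def depth_from_views(views, offsets, depths, baseline):
    """views: (K, H, W) orthographic sub-images; offsets: (K, 2)
    pinhole positions; depths: candidate depths; baseline: parallax
    scale factor relating pinhole offset and depth to image shift."""
    K, H, W = views.shape
    best_cost = np.full((H, W), np.inf)
    best_depth = np.zeros((H, W))
    for d in depths:
        shifted = np.empty_like(views)
        for k in range(K):
            # Parallax of a point at depth d, seen through pinhole k.
            dy, dx = (baseline / d) * offsets[k]
            shifted[k] = np.roll(views[k],
                                 (int(round(dy)), int(round(dx))),
                                 axis=(0, 1))
        cost = shifted.var(axis=0)   # photo-consistency across views
        mask = cost < best_cost
        best_cost[mask] = cost[mask]
        best_depth[mask] = d
    return best_depth
```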

    Beyond Reuse Distance Analysis: Dynamic Analysis for Characterization of Data Locality Potential

    Emerging computer architectures will feature drastically decreased flops/byte ratios (peak processing rate to memory bandwidth), as highlighted by recent studies of Exascale architectural trends. Further, flops are getting cheaper while the energy cost of data movement is increasingly dominant. Understanding and characterizing the data locality properties of computations is critical for guiding efforts to enhance data locality. Reuse distance analysis of memory address traces is a valuable tool for data locality characterization of programs. A single reuse distance analysis can be used to estimate the number of cache misses in a fully associative LRU cache of any size, thereby providing estimates of the minimum bandwidth requirements at different levels of the memory hierarchy needed to avoid being bandwidth bound. However, such an analysis only holds for the particular execution order that produced the trace. It cannot estimate the potential improvement in data locality from dependence-preserving transformations that change the execution schedule of the operations in the computation. In this article, we develop a novel dynamic analysis approach to characterize the inherent locality properties of a computation and thereby assess the potential for data locality enhancement via dependence-preserving transformations. The execution trace of a code is analyzed to extract a computational directed acyclic graph (CDAG) of the data dependences. The CDAG is then partitioned into convex subsets, and the convex partitioning is used to reorder the operations in the execution trace to enhance data locality. The approach enables us to go beyond reuse distance analysis of a single specific execution order in characterizing a computation's data locality properties. It can serve a valuable role in identifying promising code regions for manual transformation, as well as in assessing the effectiveness of compiler transformations for data locality enhancement. We demonstrate the effectiveness of the approach on a number of benchmarks, including case studies where the potential shown by the analysis is exploited to achieve lower data movement costs and better performance.
    Comment: Transactions on Architecture and Code Optimization (2014).
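
    For reference, here is a minimal Python implementation of the single-trace reuse (LRU stack) distance analysis that the article generalizes beyond. It uses a naive O(N*M) list scan (production tools use O(N log N) tree-based variants); with this convention, an access hits a fully associative LRU cache of capacity C exactly when its distance is below C.

```python
# Reuse (LRU stack) distance over a memory address trace. From the
# resulting histogram, misses in a fully associative LRU cache of
# capacity C are the accesses with distance >= C (cold misses count
# as infinite distance).
from collections import OrderedDict

def reuse_distances(trace):
    stack = OrderedDict()   # most recently used address last
    dists = []
    for addr in trace:
        if addr in stack:
            # Distance = number of distinct addresses accessed since
            # the previous access to addr.
            keys = list(stack)
            dists.append(len(keys) - 1 - keys.index(addr))
            del stack[addr]
        else:
            dists.append(float('inf'))   # cold miss
        stack[addr] = None               # move to most-recent position
    return dists

def misses(dists, capacity):
    return sum(1 for d in dists if d >= capacity)

# Example: the trace A, B, C, A gives distances [inf, inf, inf, 2],
# so the final access to A hits in any LRU cache of capacity >= 3.
print(reuse_distances(list("ABCA")))
```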

    Understanding Next-Generation VR: Classifying Commodity Clusters for Immersive Virtual Reality

    Commodity clusters offer the ability to deliver higher-performance computer graphics at lower prices than traditional graphics supercomputers. Immersive virtual reality systems demand notoriously high computational power to deliver adequate real-time graphics, which has led to the emergence of commodity clusters for immersive virtual reality. Such clusters deliver the required graphics power by leveraging the combined power of several computers to meet the demands of real-time interactive immersive computer graphics. However, the field of commodity cluster-based virtual reality is still in the early stages of development: current practice is ad hoc, there is no accepted means of comparing approaches, and implementers are left with instinct or trial and error when selecting an approach. This paper provides a classification system that facilitates understanding not only of the nature of different clustering systems but also of the interrelations between them. The system is built from a new model for generalized computer graphics applications, based on the flow of data through a sequence of operations over the entire context of the application. Prior models and classification systems have been too narrowly focused in context and application, whereas the system described here provides a unified means of comparing work within the field.