
    EMCCDs for space applications

    This paper describes a qualification programme for Electron-Multiplication Charge Coupled Devices (EMCCDs) for use in space applications. While the presented results are generally applicable, the programme was carried out in the context of CCD development for the Radial Velocity Spectrometer (RVS) instrument on the European Space Agency's cornerstone Gaia mission. We discuss the issues of device radiation tolerance, charge transfer efficiency at low signal levels, and lifetime effects on the electron-multiplication gain. The development of EMCCD technology to allow operation at longer wavelengths using high-resistivity silicon, and the cryogenic characterisation of EMCCDs, are also described.
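    The electron-multiplication gain discussed above follows the standard cascade model G = (1 + p)^N for a register of N gain stages with per-stage impact-ionisation probability p, with an excess noise factor whose square tends to 2 at high gain. A minimal sketch of that textbook model; the values of p and N below are illustrative, not figures from this paper:

```python
# Textbook EMCCD gain-register model (not figures from the paper above).
# p: per-stage impact-ionisation probability; n_stages: register length.

def em_gain(p: float, n_stages: int) -> float:
    """Mean multiplication gain of an EM register: G = (1 + p)^N."""
    return (1.0 + p) ** n_stages

def excess_noise_factor_sq(p: float, n_stages: int) -> float:
    """Squared excess noise factor of the cascade; approaches 2 at high gain.
    Standard approximation: F^2 = 2(G - 1) * G^(-(N+1)/N) + 1/G."""
    g = em_gain(p, n_stages)
    return 2.0 * (g - 1.0) * g ** (-(n_stages + 1) / n_stages) + 1.0 / g

# Example: a 604-stage register at p ~ 1.5 % gives a gain of several thousand,
# with F^2 already close to its asymptotic value of 2.
g = em_gain(0.015, 604)
f2 = excess_noise_factor_sq(0.015, 604)
```

    The F^2 -> 2 limit is why EM gain effectively doubles shot noise, which matters for the low-signal charge-transfer measurements the programme describes.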

    Beyond Flicker, Beyond Blur: View-coherent Metameric Light Fields for Foveated Display

    Ventral metamers, pairs of images which may differ substantially in the periphery but are perceptually identical, offer exciting new possibilities in foveated rendering and image compression, as well as insights into the human visual system. However, existing literature has mainly focused on creating metamers of static images. In this work, we develop a method for creating sequences of metameric frames, specifically light fields, with enforced consistency along the temporal, or angular, dimension. This greatly expands the potential applications for these metamers, and extending metamers along the third dimension offers further potential for compression.

    Optimizing vision and visuals: lectures on cameras, displays and perception

    The evolution of the internet is underway, where immersive virtual 3D environments (commonly known as metaverse or telelife) will replace flat 2D interfaces. Crucial ingredients in this transformation are next-generation displays and cameras representing genuinely 3D visuals while meeting the human visual system's perceptual requirements. This course will provide a fast-paced introduction to optimization methods for next-generation interfaces geared towards immersive virtual 3D environments. Firstly, we will introduce lensless cameras for high-dimensional compressive sensing (e.g., single-exposure capture of a video or one-shot 3D). Our audience will learn to process images from a lensless camera. Secondly, we introduce holographic displays as a potential candidate for next-generation displays. By the end of this course, you will learn to create your own 3D images that can be viewed using a standard holographic display. Lastly, we will introduce perceptual guidance that could be an integral part of the optimization routines of displays and cameras. Our audience will gather experience in integrating perception into display and camera optimizations. This course targets a wide range of audiences, from domain experts to newcomers. To that end, examples from this course will be based on our in-house toolkit so they are replicable for future use. The course material will provide example codes and a broad survey with crucial information on cameras, displays and perception.

    Design Notations for Creating Virtual Environment

    In this paper we propose a new design notation to improve communication in teams creating virtual environments (VEs). Our experience in creating VEs is that programmers and designers have no common formalism, which results in ambiguity and misunderstanding when creating the final VE. After teaching a selection of specification techniques to design students, we realized that we needed to create our own formalism. We then used this with designers, who found the notation useful and intuitive. More importantly, the programmers were able to interpret the formalism more accurately and reduce the time required to create virtual environments.

    Synthesis of environment maps for mixed reality

    When rendering virtual objects in a mixed reality application, it is helpful to have access to an environment map that captures the appearance of the scene from the perspective of the virtual object. It is straightforward to render virtual objects into such maps, but capturing and correctly rendering the real components of the scene into the map is much more challenging. This information is often recovered from physical light probes, such as reflective spheres or fisheye cameras, placed at the location of the virtual object in the scene. For many application areas, however, real light probes would be intrusive or impractical. Ideally, all of the information necessary to produce detailed environment maps could be captured using a single device. We introduce a method using an RGBD camera and a small fisheye camera, contained in a single unit, to create environment maps at any location in an indoor scene. The method combines the output from both cameras to correct for their limited field of view and the displacement from the virtual object, producing complete environment maps suitable for rendering the virtual content in real time. Our method improves on previous probeless approaches in its ability to recover high-frequency environment maps. We demonstrate how this can be used to render virtual objects which shadow, reflect, and refract their environment convincingly.

    Degrees of Sharing: Proximate Media Sharing and Messaging by Young People in Khayelitsha

    This paper explores the phone and mobile media sharing relationships of a group of young mobile phone users in Khayelitsha, South Africa. Intensive sharing took place within peer and intimate relationships, while resource sharing characterized relationships with a more extensive circle, including members of the older generation. Phones were kept open to others to avoid inferences of stinginess, disrespect, or secretiveness, and the use of privacy features (such as passwords) was complicated by conflicts between an ethos of mutual support and the protection of individual property and privacy. Collocated phone use trumped online sharing, but media on phones constituted public personae similar to social media ‘profiles’. Proximate sharing within close relationships allowed social display, relationship-building and deference to authority. We suggest changes to current file-based interfaces for Bluetooth pairing, media ‘galleries’, and peer-to-peer text communication to better support such proximate exchanges of media and messaging.

    Beyond blur: real-time ventral metamers for foveated rendering

    To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically different spectra of light are perceived as the same color. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves in quality over state-of-the-art foveation methods, which blur the periphery. Work in vision science has established that peripheral stimuli are ventral metamers if their statistics are similar. Existing methods, however, require a costly optimization process to find such metamers. To this end, we propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels, in parallel over all pixels, using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realizations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail and Monte-Carlo denoising. In a user study, we finally show how human task performance increases and foveation artifacts are less suspicious when using our method compared to common blurring.
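    To make the statistic family named above concrete, here is a simplified CPU sketch of Gaussian-pooled ("smooth") first and second moments of oriented filter responses. The steered Gaussian-derivative bank and the pooling width are illustrative stand-ins, not the paper's actual steerable pyramid or GPU implementation:

```python
# Simplified local-moment statistics of oriented filter responses.
# The filter bank (steered Gaussian derivatives) and pooling sigma are
# assumptions for illustration, not the paper's exact construction.
import numpy as np
from scipy.ndimage import gaussian_filter

def local_moments(image: np.ndarray, orientations: int = 4,
                  pool_sigma: float = 8.0):
    """Per-pixel mean and variance of oriented derivative responses,
    pooled over a Gaussian neighbourhood (the 'smooth' moments)."""
    dy = gaussian_filter(image, sigma=1.5, order=(1, 0))  # d/dy response
    dx = gaussian_filter(image, sigma=1.5, order=(0, 1))  # d/dx response
    stats = []
    for k in range(orientations):
        theta = np.pi * k / orientations
        # First-derivative filters steer exactly from two basis responses.
        resp = np.cos(theta) * dx + np.sin(theta) * dy
        mean = gaussian_filter(resp, pool_sigma)                   # E[r]
        var = gaussian_filter(resp * resp, pool_sigma) - mean ** 2  # E[r^2]-E[r]^2
        stats.append((mean, var))
    return stats

stats = local_moments(np.random.default_rng(0).random((64, 64)))
```

    The pooling step is a fixed-size separable convolution per pixel, which is why this class of statistics can be extracted in time constant in the number of pixels and in parallel on a GPU.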

    Metameric Inpainting for Image Warping

    Image warping, a per-pixel deformation of one image into another, is an essential component in immersive visual experiences such as virtual reality or augmented reality. The primary issue with image warping is disocclusions, where occluded (and hence unknown) parts of the input image would be required to compose the output image. We introduce a new image warping method, metameric image inpainting, an approach for hole-filling in real time with foundations in human visual perception. Our method estimates image feature statistics of disoccluded regions from their neighbours. These statistics are inpainted and used to synthesise visuals in real time that are less noticeable to study participants, particularly in peripheral vision. Our method offers speed improvements over standard structured image inpainting methods while improving realism over colour-based inpainting such as push-pull. Hence, our work paves the way towards future applications such as depth image-based rendering, 6-DoF 360° rendering, and remote render-streaming.
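    The push-pull baseline named above can be sketched as a small weighted image pyramid: averages of known pixels are pushed to coarser levels, then pulled back down to fill the holes. A minimal single-channel version, assuming even side lengths at every level (this is the colour-based baseline, not the paper's statistic-inpainting method):

```python
# Minimal push-pull hole filling (the colour-based baseline, not the
# paper's metameric method). Assumes even side lengths at every level,
# e.g. power-of-two images.
import numpy as np

def push_pull(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill pixels where mask is False from coarser pyramid levels."""
    img = img.astype(float)
    w = mask.astype(float)
    if min(img.shape) <= 1 or mask.all():
        return img.copy()
    # Push: weighted 2x2 average down to the next-coarser level.
    iw = img * w
    num = iw[0::2, 0::2] + iw[1::2, 0::2] + iw[0::2, 1::2] + iw[1::2, 1::2]
    den = w[0::2, 0::2] + w[1::2, 0::2] + w[0::2, 1::2] + w[1::2, 1::2]
    coarse = np.where(den > 0, num / np.maximum(den, 1e-9), 0.0)
    # Recurse so fully-unknown coarse pixels get filled even further up.
    coarse = push_pull(coarse, den > 0)
    # Pull: nearest-neighbour upsample the coarse level into the holes.
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    out = img.copy()
    out[~mask] = up[~mask]
    return out

img = np.ones((8, 8)); img[3:5, 3:5] = 0.0          # hole values unknown
mask = np.ones((8, 8), bool); mask[3:5, 3:5] = False
filled = push_pull(img, mask)
```

    Because each hole pixel is ultimately a blur of its surroundings, push-pull is fast but over-smooth, which is the realism gap the statistic-based inpainting above targets.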

    Broad-band X-ray spectral analysis of the Seyfert 1 galaxy GRS 1734-292

    We discuss the broad-band X-ray spectrum of GRS 1734−292 obtained from non-simultaneous XMM–Newton and NuSTAR (Nuclear Spectroscopic Telescope Array) observations, performed in 2009 and 2014, respectively. GRS 1734−292 is a Seyfert 1 galaxy, located near the Galactic plane at z = 0.0214. The NuSTAR spectrum (3–80 keV) is dominated by a primary power-law continuum with Γ = 1.65 ± 0.05 and a high-energy cut-off E_c = 53 (+11/−8) keV, one of the lowest measured by NuSTAR in a Seyfert galaxy. Comptonization models show a temperature of the coronal plasma of kT_e = 11.9 (+1.2/−0.9) keV and an optical depth of τ = 2.98 (+0.16/−0.19) assuming a slab geometry, or a similar temperature and τ = 6.7 (+0.3/−0.4) assuming a spherical geometry. The 2009 XMM–Newton spectrum is well described by a flatter intrinsic continuum (Γ = 1.47 (+0.07/−0.03)) and one absorption line due to Fe XXV Kα produced by a warm absorber. Both data sets show a modest iron Kα emission line at 6.4 keV and the associated Compton reflection, due to reprocessing from neutral circumnuclear material.
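    The primary continuum quoted above is the standard cut-off power law, F(E) ∝ E^(−Γ) · exp(−E/E_c). A small sketch evaluating it with the NuSTAR best-fit values; the normalisation is arbitrary, and this is only the continuum component, not the full model with reflection and absorption:

```python
# Cut-off power-law continuum with the NuSTAR best-fit parameters quoted
# above (Gamma = 1.65, E_c = 53 keV). Normalisation K is arbitrary.
import numpy as np

def cutoff_powerlaw(E_keV, gamma=1.65, e_cut_keV=53.0, K=1.0):
    """Photon flux density (arbitrary units) at photon energy E in keV."""
    E = np.asarray(E_keV, dtype=float)
    return K * E ** (-gamma) * np.exp(-E / e_cut_keV)

E = np.geomspace(3.0, 80.0, 5)   # sample the 3-80 keV NuSTAR band
flux = cutoff_powerlaw(E)        # monotonically falling continuum
```

    With E_c as low as 53 keV, the exponential term already suppresses the spectrum noticeably inside the NuSTAR band, which is what makes the cut-off measurable here.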

    Hybrid of swarm intelligent algorithms in medical applications

    In this paper, we designed a hybrid of swarm intelligence algorithms to diagnose hepatitis, breast tissue, and dermatology conditions in patients. The effectiveness of hybrid swarm intelligence algorithms was studied since no single algorithm is effective in solving all types of problems. In this study, feed-forward and Elman recurrent neural networks (ERN) with swarm intelligence algorithms are used for the classification of the mentioned diseases. The capabilities of six (6) global optimization learning algorithms were studied and their performances in training as well as testing were compared. These algorithms include: a hybrid of the Cuckoo Search algorithm (CS) and Levenberg-Marquardt (LM) (CSLM), CS and backpropagation (BP) (CSBP), CS and ERN (CSERN), Artificial Bee Colony (ABC) and LM (ABCLM), ABC and BP (ABCBP), and Genetic Algorithm (GA) and BP (GANN). Comparative simulation results indicated that the classification accuracy and run time of the CSLM outperform the CSERN, GANN, ABCBP, ABCLM, and CSBP on the breast tissue dataset. On the other hand, the CSERN performs better than the CSLM, GANN, ABCBP, ABCLM, and CSBP in both th
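    The Cuckoo Search half of the CSLM/CSBP/CSERN hybrids above rests on Lévy-flight moves. A hedged sketch of that global-search step using Mantegna's algorithm; the local refinement stage (LM or BP training of the network weights) is omitted, and the parameter values are illustrative:

```python
# Levy-flight proposal step of Cuckoo Search (Mantegna's algorithm).
# This is only the global-search move; in the hybrids discussed above,
# a Levenberg-Marquardt or backpropagation refinement would follow.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim: int, beta: float = 1.5, rng=None) -> np.ndarray:
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_move(nest: np.ndarray, best: np.ndarray,
                alpha: float = 0.01, rng=None) -> np.ndarray:
    """Propose a new nest: the current position plus a Levy step
    scaled by the distance to the best solution found so far."""
    if rng is None:
        rng = np.random.default_rng()
    step = levy_step(nest.size, rng=rng)
    return nest + alpha * step * (nest - best)
```

    The heavy-tailed Lévy steps give occasional long jumps that escape local minima, which is the property the gradient-based LM/BP stages of these hybrids lack on their own.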