4,454 research outputs found

    The camera of the fifth H.E.S.S. telescope. Part I: System description

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highland of Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m² reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present Part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, covering both the hardware and the software and emphasizing the main improvements over previous H.E.S.S. camera technology. Comment: 16 pages, 13 figures, accepted for publication in NIM.

    Digital implementation of the cellular sensor-computers

    Two different kinds of cellular sensor-processor architectures are used today in various applications. The first is the traditional sensor-processor architecture, in which the sensor and processor arrays are mapped onto each other. The second is the foveal architecture, in which a small active fovea navigates within a large sensor array. This second architecture is introduced and compared here. Both architectures can be implemented with analog or digital processor arrays. The efficiency of the different implementation types, as a function of the CMOS technology used, is analyzed. It turns out that the finer the technology, the more favorable a digital implementation becomes compared to an analog one.

    An electronic pan/tilt/zoom camera system

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, the device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnification and pan/tilt/rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
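    The per-pixel dewarping transform described in this abstract can be sketched numerically. The snippet below is a minimal illustration only: it assumes an equidistant fisheye projection model, and the function name and all parameters are illustrative choices, not details taken from the device's actual transformation processor.

```python
import math

def fisheye_lookup(pan, tilt, px, py, f_persp, r_max, fov=math.pi / 2):
    """Map a pixel (px, py) of a virtual perspective view, oriented by
    pan/tilt angles (radians), to coordinates on an equidistant fisheye
    image of radius r_max covering a hemispherical FOV.

    Hypothetical helper illustrating the kind of per-pixel lookup an
    electronic pan/tilt/zoom device performs in hardware."""
    # Ray through the virtual image plane at focal length f_persp.
    x, y, z = px, py, f_persp
    # Rotate by tilt about the x-axis, then by pan about the y-axis.
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    # Equidistant fisheye: image radius is proportional to the off-axis angle.
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))
    r = r_max * theta / fov          # fov = half-angle of the hemisphere
    phi = math.atan2(y, x)
    return r * math.cos(phi), r * math.sin(phi)

# The optical axis of an unrotated view maps to the fisheye image center.
center = fisheye_lookup(0.0, 0.0, 0.0, 0.0, 1.0, 100.0)
print(center)   # (0.0, 0.0)
```

    In the real device this mapping is evaluated for every output pixel by dedicated circuitry, which is what makes video-rate correction possible.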

    Acoustical structured illumination for super-resolution ultrasound imaging.

    Structured illumination microscopy is an optical method that increases the spatial resolution of wide-field fluorescence imaging beyond the diffraction limit by applying spatially structured illumination light. Here, we extend this concept to super-resolution ultrasound imaging by manipulating the transmitted sound field to encode high spatial frequencies into the observed image through aliasing. Post-processing is applied to precisely shift the spectral components to their proper positions in k-space, effectively doubling the spatial resolution of the reconstructed image compared to one-way focusing. The method has broad applications, including the detection of small lesions for early cancer diagnosis, improved delineation of the borders of organs and tumors, and enhanced visualization of vascular features. It can be implemented on conventional ultrasound systems without additional components. The resulting image enhancement is demonstrated with both test objects and ex vivo rat metacarpals and phalanges.
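    The aliasing-based encoding can be illustrated in one dimension. The toy sketch below is not from the paper: the frequencies, array size, and variable names are all arbitrary choices made for demonstration. It shows how multiplying a high spatial frequency by a known structured-transmit carrier creates sum and difference components, which is what lets post-processing relocate energy to its proper place in k-space.

```python
import numpy as np

# Toy 1-D illustration (all frequencies and sizes are arbitrary):
# modulating detail at 60 cycles by a 40-cycle carrier produces
# components at the difference (20) and sum (100) frequencies, so
# content beyond a limited detection band aliases into view.
n = 256
carrier = 40                                     # structured-transmit frequency (bins)
x = np.arange(n)
high_freq = np.cos(2 * np.pi * 60 * x / n)       # "detail" at 60 cycles
observed = high_freq * np.cos(2 * np.pi * carrier * x / n)

spectrum = np.abs(np.fft.fft(observed))
# Product-to-sum identity: energy appears at |60 - 40| = 20 and 60 + 40 = 100.
peaks = sorted(int(i) for i in np.argsort(spectrum[: n // 2])[-2:])
print(peaks)   # [20, 100]
```

    Knowing the carrier exactly is what allows the difference component at 20 to be shifted back to 60 during reconstruction.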

    The Allen Telescope Array: The First Widefield, Panchromatic, Snapshot Radio Camera for Radio Astronomy and SETI

    The first 42 elements of the Allen Telescope Array (ATA-42) are beginning to deliver data at the Hat Creek Radio Observatory in Northern California. Scientists and engineers are actively exploiting all of the flexibility designed into this innovative instrument for simultaneously conducting surveys of the astrophysical sky and searches for distant technological civilizations. This paper summarizes the design elements of the ATA, the cost savings made possible by the use of COTS components, and the cost/performance trades that eventually enabled this first snapshot radio camera. The fundamental scientific program of this new telescope is varied and exciting; some of the first astronomical results will be discussed. Comment: Special Issue of Proceedings of the IEEE: "Advances in Radio Telescopes", Baars, J., Thompson, R., D'Addario, L., eds., 2009, in press.

    The QUEST large area CCD camera

    We have designed, constructed, and put into operation a very large area CCD camera that covers the field of view of the 1.2 m Samuel Oschin Schmidt Telescope at the Palomar Observatory. The camera consists of 112 CCDs arranged in a mosaic of four rows with 28 CCDs each. The CCDs are 600 × 2400 pixel Sarnoff thinned, back-illuminated devices with 13 ÎŒm × 13 ÎŒm pixels. The camera covers an area of 4.6° × 3.6° on the sky, with an active area of 9.6 deg². The camera has been installed at the prime focus of the telescope and commissioned, and scientific-quality observations for the Palomar-QUEST Variability Sky Survey began in 2003 September. The design considerations, construction features, and performance parameters of the camera are described in this paper.

    Accelerated volumetric reconstruction from uncalibrated camera views

    While both work with images, computer graphics and computer vision are inverse problems of each other. Computer graphics traditionally starts with geometric models as input and produces image sequences; computer vision starts with image sequences and produces geometric models. In recent years, research has converged to bridge the gap between the two fields, producing a new field called Image-Based Modeling and Rendering (IBMR). IBMR uses geometric information recovered from real images to generate new images, with the goal that the synthesized ones appear photorealistic, while also reducing the time spent on model creation. In this dissertation, the capturing, geometric, and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as computer gaming and virtual reality from a low-cost perspective. In the spirit of IBMR, the human operator provides the high-level information, while underlying algorithms perform the low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method that allows fast GPU-based processing on commodity hardware.
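    The voxel carving step at the heart of such a pipeline can be sketched in a few lines. The code below is a simplified, non-streaming version under strong assumptions: orthographic views along the coordinate axes and binary silhouette masks, with all names chosen for illustration. The dissertation's contribution is a streaming GPU formulation with calibrated perspective cameras, which this sketch does not attempt to reproduce.

```python
import numpy as np

def carve(occupied, silhouettes):
    """occupied: (n, n, n) bool grid; silhouettes: dict axis -> (n, n) mask.
    A voxel survives only if its projection lies inside every silhouette."""
    out = occupied.copy()
    for axis, mask in silhouettes.items():
        # Orthographic projection = dropping one coordinate, so we can
        # broadcast each 2-D mask back across its axis and intersect.
        out &= np.expand_dims(mask, axis=axis)
    return out

n = 4
grid = np.ones((n, n, n), dtype=bool)
silhouette = np.zeros((n, n), dtype=bool)
silhouette[1:3, 1:3] = True                   # a 2x2 blob seen in each view
carved = carve(grid, {0: silhouette, 1: silhouette, 2: silhouette})
print(int(carved.sum()))   # 8 voxels survive: the 2x2x2 intersection
```

    The streaming variant processes the grid slab by slab so that only a small working set resides in GPU memory at any time.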

    Spatially Smart Optical Sensing and Scanning

    Methods, devices, and systems are described for an optical sensor that makes spatially smart 3-D object measurements, using variable-focal-length lenses to target both specular and diffuse objects by matching the transverse dimensions of the sampling optical beam to the transverse size of a flat target at a given axial distance, enabling instantaneous spatial mapping of the flat target zone. The sensor allows volumetric-data-compressed remote sensing of object transverse dimensions, including cross-sectional size, transverse motion displacement, inter-object transverse gap distance, 3-D animation data acquisition, laser-based 3-D machining, and 3-D inspection and testing. One embodiment provides a 2-D optical display using 2-D laser scanning and 3-D beam-forming optics engaged with the sensor optics to measure the distance of the display screen from the laser source and scanning optics, by adjusting the focus to produce the smallest focused beam spot on the screen. With the screen distance known, the angular scan range for the scan mirrors can be computed to generate the desired number of scanned spots in the 2-D display.
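    The final display-geometry computation lends itself to a back-of-envelope sketch. The snippet below assumes a flat screen centered on the scanner; that assumption, and every name in the code, is illustrative rather than taken from the patent.

```python
import math

def scan_half_angle(screen_width_m, screen_distance_m):
    # Mirror half-angle needed to sweep a screen of the given width at the
    # measured distance, from plane geometry: tan(theta) = (w/2) / d.
    return math.atan(screen_width_m / (2.0 * screen_distance_m))

def spots_per_line(screen_width_m, spot_diameter_m):
    # Resolvable spots across one scan line, set by the focused spot size.
    return round(screen_width_m / spot_diameter_m)

# A 1 m wide screen measured to be 2 m away, with a 1 mm focused spot.
theta = math.degrees(scan_half_angle(1.0, 2.0))
print(round(theta, 2))               # 14.04
print(spots_per_line(1.0, 0.001))    # 1000
```

    This is why the distance measurement matters: the same mirror deflection sweeps a wider arc on a farther screen, so the scan range must be rescaled per measured distance.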