
    Overview of ghost correction for HDR video stream generation

    Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited luminance range of the scene [1], about two orders of magnitude (roughly 256 to 1024 levels). The dynamic range of real-world scenes, however, spans several more orders of magnitude (on the order of 10,000 levels). Several methods exist to overcome this limitation and create high dynamic range (HDR) images: expensive approaches use a dedicated HDR image sensor, while low-cost solutions rely on a conventional LDR image sensor. Most low-cost solutions apply temporal exposure bracketing. The HDR image may then be constructed with a standard HDR method (an additional step, tone mapping, is required to display the result on conventional displays), or by directly fusing the LDR images taken at different exposure times, producing HDR-like [2] images that LDR monitors can handle directly. Temporal exposure bracketing works for static scenes, but it cannot be applied directly to dynamic scenes or HDR video, because camera or object motion between the bracketed exposures creates artifacts in the HDR image known as ghosts [3]. Several techniques exist for detecting and removing ghost artifacts (variance-based, entropy-based, bitmap-based, and graph-cuts-based ghost detection, among others) [4]; however, most of these methods are computationally expensive and cannot be considered for real-time implementation. The originality and final goal of our work is to upgrade our current smart camera so that it generates an HDR video stream at full sensor resolution (1280x1024) at 60 fps [5]. The HDR stream is produced using exposure bracketing (obtained with a conventional LDR image sensor) combined with a tone-mapping algorithm. In this paper, we propose an overview of the ghost-correction methods available in the state of the art. The selection of algorithms is driven by our final goal: a real-time hardware implementation of the ghost detection and removal stages.
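    As an illustration of the simplest family of methods listed above, the following is a minimal variance-based ghost-mask sketch in Python/NumPy. It assumes the LDR frames have already been linearized and are normalized here by exposure time; the function name, threshold value and input conventions are our own assumptions, not the implementation discussed in the paper.

```python
import numpy as np

def variance_ghost_mask(frames, exposure_times, threshold=0.05):
    """Flag pixels whose normalized intensity varies too much across exposures.

    frames: list of grayscale LDR frames (H x W float arrays in [0, 1], linearized)
    exposure_times: matching list of exposure times in seconds
    threshold: variance level above which a pixel is treated as a ghost
    (names, units and the default threshold are illustrative assumptions)
    """
    # Normalize each frame by its exposure time so a static, unsaturated pixel
    # maps to roughly the same radiance estimate in every frame.
    radiance = np.stack([f / t for f, t in zip(frames, exposure_times)], axis=0)
    # High per-pixel variance across the stack indicates camera or object motion.
    return np.var(radiance, axis=0) > threshold
```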

    Pencil-Beam Surveys for Trans-Neptunian Objects: Novel Methods for Optimization and Characterization

    Digital co-addition of astronomical images is a common technique for increasing signal-to-noise and image depth. A modification of this simple technique has been applied to the detection of minor bodies in the Solar System: first, stationary objects are removed through the subtraction of a high-SN template image; then the sky motion of the Solar System bodies of interest is predicted and compensated for by shifting pixels in software prior to the co-addition step. This "shift-and-stack" approach has been applied with great success in directed surveys for minor Solar System bodies. In these surveys, the shifts have been parameterized in a variety of ways. However, these parameterizations have not been optimized, and in most cases they cannot be effectively applied to data sets with long observation arcs because objects' real trajectories diverge from linear tracks on the sky. This paper presents two novel probabilistic approaches for determining a near-optimum set of shift-vectors to apply to any image set given a desired region of orbital space to search. The first method is designed for short observational arcs, and the second for observational arcs long enough to require non-linear shift-vectors. Using these techniques and other optimizations, we derive optimized grids for previous surveys that have used "shift-and-stack" approaches to illustrate the improvements that can be made with our method, and at the same time derive new limits on the range of orbital parameters these surveys searched. We conclude with a simulation of a future application of this approach with LSST, and show that combining multiple nights of data from such next-generation facilities is within the realm of computational feasibility. (Comment: Accepted for publication in PASP, March 1, 2010)
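    To make the "shift-and-stack" idea concrete, the sketch below implements the basic linear-track variant described above (not the paper's probabilistic shift-vector selection): each frame is shifted according to a trial sky-motion rate and the frames are then co-added. The function name, units and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_and_stack(images, times, rate_x, rate_y):
    """Co-add frames after compensating a candidate linear sky motion.

    images: list of background-subtracted frames (H x W arrays)
    times: observation times relative to the first frame, in hours
    rate_x, rate_y: trial motion rates in pixels per hour
    (names and units are illustrative assumptions)
    """
    stacked = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, times):
        # Shift so that an object moving at the trial rate lands on the same
        # pixel in every frame, then accumulate.
        stacked += subpixel_shift(img, shift=(-rate_y * t, -rate_x * t), order=1)
    return stacked / len(images)
```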

    Novel scanning techniques for CCD image capture and display

    This work details two investigations into image capture, taken from the fields of X-ray and laser research, and also details two scanning systems: a wire-surface generator and a video security device. Firstly, a camera system is described that can display images, digitize them and provide real-time false shading. This camera is shown to have a linear intensity response and a maximum saturation level below the digitizing range; some example outputs are then illustrated. The ability to irradiate CCDs with direct X-ray radiation is also investigated. A camera is developed that vertically integrates such images; it is shown to increase the processing speed of existing equipment and to reduce experiment times by a factor of 388. Taking this idea further, a fast one-dimensional camera is developed. This camera couples laser pulses onto a CCD via a fibre-optic faceplate and a 25 µm slit. Unusual scanning techniques are used to achieve image storage within the sensor itself, and a method for correcting dark current and other errors is proposed. Next, a mechanism for displaying wire-surface representations of intensity images is investigated. Results obtained from real-time, hidden-line-removal hardware are illustrated, along with improved algorithms for shaded-surface generation. This is then developed into a security device protecting VDUs from radio-based surveillance, achieved by randomizing the display order of raster lines along with a hardware solution for random sequence generation. Finally, the generation of uniformly distributed random numbers is achieved by processing readings from a digitized, Normally distributed voltage source. The effects of this processing are investigated, and an analysis of the underlying theory is used to determine an optimal setting for the gain stage.
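    The final step, mapping readings from a digitized, Normally distributed voltage source to uniformly distributed random numbers, can be illustrated with a probability integral transform. This is only one plausible post-processing, sketched in Python; the processing and gain-stage analysis actually used in the thesis are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def normals_to_uniform(samples):
    """Map digitized readings from a roughly Gaussian noise source to [0, 1].

    Probability integral transform: if X ~ N(mu, sigma^2), then
    Phi((X - mu) / sigma) is uniformly distributed. Estimating mu and sigma
    from the samples themselves is an assumption made for this sketch.
    """
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std()
    return norm.cdf((samples - mu) / sigma)
```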

    3D modeling of indoor environments by a mobile platform with a laser scanner and panoramic camera

    One major challenge of 3DTV is content acquisition. Here, we present a method to acquire a realistic, visually convincing 3D model of indoor environments based on a mobile platform that is equipped with a laser range scanner and a panoramic camera. The data of the 2D laser scans are used to solve the simultaneous localization and mapping problem and to extract walls. Textures for walls and floor are built from the images of a calibrated panoramic camera. Multiresolution blending is used to hide seams in the generated textures. The scene is further enriched by 3D geometry calculated from a graph-cut stereo technique. We present experimental results from a moderately large real environment.
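    The multiresolution blending step mentioned above can be sketched with a standard Laplacian-pyramid blend, shown below using OpenCV. This is a generic implementation of the technique rather than the authors' code; the pyramid depth and the assumption that both patches and the weight mask share the same shape and float type are ours.

```python
import cv2
import numpy as np

def pyramid_blend(patch_a, patch_b, mask, levels=5):
    """Blend two overlapping texture patches with a Laplacian pyramid.

    patch_a, patch_b: float32 images of identical shape
    mask: float32 weight map in [0, 1], same shape (1 selects patch_a)
    levels: pyramid depth (illustrative default)
    """
    # Gaussian pyramids of the two patches and of the blend mask.
    ga, gb, gm = [patch_a], [patch_b], [mask]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    # Blend the coarsest level, then add blended band-pass layers on the way up.
    blended = ga[-1] * gm[-1] + gb[-1] * (1.0 - gm[-1])
    for i in range(levels - 1, -1, -1):
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        blended = cv2.pyrUp(blended, dstsize=size) + la * gm[i] + lb * (1.0 - gm[i])
    return blended
```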

    New instruments and technologies for Cultural Heritage survey: full integration between point clouds and digital photogrammetry

    In recent years, the Geomatic Research Group of the Politecnico di Torino has addressed new research topics concerning new instruments for point-cloud generation (e.g. Time-of-Flight cameras) and the tight integration of multi-image matching techniques with 3D point-cloud information, in order to resolve the ambiguities of existing matching algorithms. ToF cameras can be a good low-cost alternative to LiDAR instruments for the generation of precise and accurate point clouds: up to now their application range is still limited, but in the near future they should be able to satisfy most Cultural Heritage metric survey requirements. On the other hand, multi-image matching techniques with a correct and deep integration of the point-cloud information can provide the right solution for an "intelligent" survey of the geometric object break-lines, which are the correct starting point for a complete survey. These two research topics are strictly connected to a modern Cultural Heritage 3D survey approach. In this paper, after a short analysis of the results achieved, an alternative possible scenario for the development of the metric survey approach within the wider topic of Cultural Heritage documentation is reported.

    Creating Simplified 3D Models with High Quality Textures

    This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures. This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method is implemented in real time by means of GPU parallel processing. Visualization via ray casting of both geometry and colour volumes provides users with real-time feedback of the currently scanned 3D model. Experimental results show that the proposed method is capable of preserving the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed. (Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -)
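    Step (iii), quadric-based mesh decimation, can be reproduced with an off-the-shelf implementation. The sketch below uses Open3D's quadric edge-collapse simplification as a stand-in for the paper's pipeline; the file paths and triangle budget are illustrative assumptions.

```python
import open3d as o3d

def decimate_mesh(input_path, output_path, target_triangles=50_000):
    """Simplify a dense reconstruction with quadric edge-collapse decimation.

    Uses Open3D's built-in simplify_quadric_decimation as a stand-in for the
    paper's pipeline; paths and the triangle budget are illustrative.
    """
    mesh = o3d.io.read_triangle_mesh(input_path)
    mesh.compute_vertex_normals()
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)
    o3d.io.write_triangle_mesh(output_path, simplified)
    return simplified
```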