
    An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide Field Channel of the Advanced Camera for Surveys. The study is based on profiles of warm pixels in 168 dark exposures taken between September and October 2009. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply the image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.
    Comment: 86 pages, 25 figures (6 in low resolution). PASP accepted on July 21, 201
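
    The correction described in this abstract is a forward-model inversion: model how traps redistribute charge down each column during readout, then iterate an estimate of the original image until pushing it through the model reproduces the observation. The sketch below illustrates only that general idea; the trap model, loss fraction, and release timescale are invented toy values, not the paper's calibrated parameters.

```python
import numpy as np

def cte_forward(image, loss=0.02, tau=3.0, trail_len=70):
    """Toy forward model: each pixel loses a fraction of its charge to traps,
    which is released exponentially into the trailing pixels along the readout
    direction (axis 0). Illustrative stand-in, not the paper's trap model."""
    k = np.arange(1, trail_len + 1)
    release = np.exp(-k / tau)
    release /= release.sum()            # trapped charge is fully released into the trail
    out = image * (1.0 - loss)          # charge that survives in the original pixel
    trapped = image * loss
    for j, frac in enumerate(release, start=1):
        out[j:, :] += trapped[:-j, :] * frac   # shift trailing charge down the column
    return out

def cte_correct(observed, n_iter=5, **kwargs):
    """Invert the forward model by fixed-point iteration: nudge the estimate
    until the modeled readout of the estimate matches the observed image."""
    estimate = observed.astype(float)
    for _ in range(n_iter):
        estimate += observed - cte_forward(estimate, **kwargs)
    return estimate
```

    Because the trailed charge is a small perturbation of the image, a handful of fixed-point iterations is usually enough for an estimate of this kind to converge.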

    Navigation of mobile robots using artificial intelligence technique

    The ability to acquire a representation of the spatial environment and the ability to localize within it are essential for successful navigation in a priori unknown environments. This document presents a computer vision method and related algorithms for the navigation of a robot in a static environment. Our environment is a simple white-colored area with black obstacles and a robot (carrying an identification mark, a circle and a rectangle of orange color, which gives it a direction) present on it. This environment is captured by a camera, which sends the image to the desktop over a data cable. The image is then converted from JPEG to binary format and processed in MATLAB. The data acquired from this program are then used as input for another program, which controls the robot's drive motors over a wireless link. The robot then tries to reach its destination while avoiding obstacles in its path. The algorithm presented in this paper uses the distance transform methodology to generate paths for the robot to execute. This paper describes an algorithm for approximately finding the fastest route for a vehicle to travel from a starting point to a destination point in a digital plane map, avoiding obstacles along the way. In our experimental setup the camera used is a SONY HANDYCAM, which grabs the image and specifies the location of the robot (starting point) in the plane and its destination point. The destination point used in our experimental setup is a table tennis ball, but it can be any other entity such as a single person, a combat unit, or a vehicle.
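
    The planning step described in this abstract can be illustrated with a wavefront (grassfire) distance transform on a binary occupancy grid: propagate distance outward from the goal through free cells, then follow the steepest descent of the distance field from the start. The grid, start, and goal below are made-up toy inputs standing in for the segmented camera image, and the sketch is in Python rather than the MATLAB pipeline the abstract describes.

```python
import numpy as np
from collections import deque

def distance_transform(grid, goal):
    """Wavefront distance from the goal over free cells (0 = free, 1 = obstacle)."""
    dist = np.full(grid.shape, np.inf)
    dist[goal] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and grid[nr, nc] == 0 and dist[nr, nc] == np.inf):
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist

def plan_path(grid, start, goal):
    """Follow the steepest descent of the distance field from start to goal."""
    dist = distance_transform(grid, goal)
    if not np.isfinite(dist[start]):
        return None                      # goal unreachable from the start cell
    path, cur = [start], start
    while cur != goal:
        r, c = cur
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < grid.shape[0] and 0 <= c + dc < grid.shape[1]]
        cur = min(neighbours, key=lambda p: dist[p])
        path.append(cur)
    return path

# Toy occupancy grid: 0 = free white floor, 1 = black obstacle.
grid = np.zeros((10, 10), dtype=int)
grid[3:7, 4] = 1
print(plan_path(grid, start=(0, 0), goal=(9, 9)))
```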

    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, the effective processing of raw LiDAR data and the generation of an efficient, high-quality DEM remain big challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground points is the most critical and difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and the choice of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality; therefore, data reduction should be conducted in such a way that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be extracted directly from the LiDAR data. Extraction of breaklines and their integration into DEM generation are presented.
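
    As a concrete illustration of the filter-then-interpolate pipeline this review covers, the sketch below implements one of the simplest possible variants: a block-minimum ground filter followed by linear interpolation onto a regular grid. The cell size, height tolerance, and DEM resolution are arbitrary illustrative values, not recommendations from the paper, and operational filters are considerably more sophisticated.

```python
import numpy as np
from scipy.interpolate import griddata

def lidar_to_dem(points, cell=1.0, height_tol=0.3, resolution=0.5):
    """points: (N, 3) array of x, y, z LiDAR returns.
    1) Block-minimum filter: points close to the lowest return in each coarse
       cell are treated as ground; the rest (vegetation, buildings) as non-ground.
    2) Interpolate the ground points onto a regular DEM grid."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)

    cell_min = {}
    for i, j, zi in zip(ix, iy, z):
        cell_min[(i, j)] = min(zi, cell_min.get((i, j), np.inf))
    ground = np.array([zi - cell_min[(i, j)] <= height_tol
                       for i, j, zi in zip(ix, iy, z)])

    gx = np.arange(x.min(), x.max(), resolution)
    gy = np.arange(y.min(), y.max(), resolution)
    XX, YY = np.meshgrid(gx, gy)
    dem = griddata((x[ground], y[ground]), z[ground], (XX, YY), method='linear')
    return XX, YY, dem
```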

    Collaborative Representation based Classification for Face Recognition

    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) leads to interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on the coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is largely overlooked. In this paper we discuss how SRC works, and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and the coding coefficients. More specifically, the l1- or l2-norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1- or l2-norm characterization of the coding coefficients is related to the degree of discrimination of facial features. Extensive experiments were conducted to verify the face recognition accuracy and efficiency of CRC with different instantiations.
    Comment: This is a substantial revision of a previous conference paper (L. Zhang, M. Yang, et al., "Sparse Representation or Collaborative Representation: Which Helps Face Recognition?", ICCV 2011).
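
    The l2/l2 instantiation of CRC discussed above (often referred to as CRC-RLS) codes the query with regularized least squares over all training samples and then compares class-wise reconstruction residuals. The sketch below follows that formulation; the regularization weight and the normalization of the residual by the coefficient energy are illustrative choices, not tuned values from the paper.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Collaborative representation classification, l2/l2 instantiation.
    X: (d, n) matrix whose columns are all training samples (unit-normalized),
    labels: length-n array of class labels, y: query sample of length d."""
    # Code the query collaboratively over ALL training samples (ridge regression).
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    # Assign the class whose samples best reconstruct the query, with the
    # residual normalized by the energy of that class's coefficients.
    best_label, best_score = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        score = residual / (np.linalg.norm(alpha[mask]) + 1e-12)
        if score < best_score:
            best_label, best_score = c, score
    return best_label
```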