
    The Ultraviolet View of the Magellanic Clouds from GALEX: A First Look at the LMC Source Catalog

    The Galaxy Evolution Explorer (GALEX) has performed unprecedented imaging surveys of the Magellanic Clouds (MC) and their surrounding areas, including the Magellanic Bridge (MB), in near-UV (NUV, 1771–2831 Å) and far-UV (FUV, 1344–1786 Å) bands at 5" resolution. Substantially more area was covered in the NUV than in the FUV, particularly in the bright central regions, because of the GALEX FUV detector failure. The 5σ depth of the NUV imaging varies between 20.8 and 22.7 (AB mag). Such imaging provides the first sensitive view of the entire content of hot stars in the Magellanic System, revealing the presence of young populations even at sites with extremely low star-formation-rate surface density such as the MB, owing to the high sensitivity of the UV data to hot stars and the dark sky at these wavelengths. The density of UV sources is quite high in many areas of the LMC and SMC. Crowding limits the quality of source detection and photometry from the standard mission pipeline processing. We therefore performed custom photometry of the GALEX data in the MC survey region (<15° from the LMC, <10° from the SMC). After merging multiple detections of sources in overlapping images, the resulting catalog we have produced for the LMC contains nearly 6 million unique NUV point sources within 15° and is briefly presented herein. This paper provides a first look at the GALEX MC survey and highlights some of the science investigations that the entire catalog and imaging dataset will make possible. Comment: 16 pages, 8 figures; J. Adv. Space Res. (2013)
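The abstract mentions merging multiple detections of the same source from overlapping images before producing the unique-source catalog. A minimal sketch of one way such deduplication can work is shown below; the actual GALEX pipeline's matching scheme is not described in the abstract, and the match radius and keep-brightest rule here are illustrative assumptions only.

```python
import numpy as np

def merge_detections(ra, dec, mag, match_radius_arcsec=2.5):
    """Greedy merge of duplicate point-source detections from
    overlapping images: detections closer than the match radius are
    treated as one object, keeping the brightest magnitude.
    Illustrative sketch only, not the catalog's actual algorithm."""
    order = np.argsort(mag)              # brightest (smallest mag) first
    radius_deg = match_radius_arcsec / 3600.0
    kept_ra, kept_dec, kept_mag = [], [], []
    for i in order:
        dup = False
        for r, d in zip(kept_ra, kept_dec):
            # small-angle separation with cos(dec) correction
            dra = (ra[i] - r) * np.cos(np.radians(dec[i]))
            ddec = dec[i] - d
            if np.hypot(dra, ddec) < radius_deg:
                dup = True
                break
        if not dup:
            kept_ra.append(ra[i])
            kept_dec.append(dec[i])
            kept_mag.append(mag[i])
    return np.array(kept_ra), np.array(kept_dec), np.array(kept_mag)
```

A production cross-match over millions of sources would use a spatial index (e.g. a k-d tree on unit vectors) rather than this quadratic loop.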

    The pictures we like are our image: continuous mapping of favorite pictures into self-assessed and attributed personality traits

    Flickr allows its users to tag the pictures they like as “favorite”. As a result, many users of the popular photo-sharing platform produce galleries of favorite pictures. This article proposes new approaches, based on Computational Aesthetics, capable of inferring the personality traits of Flickr users from such galleries. In particular, the approaches map low-level features extracted from the pictures into numerical scores corresponding to the Big-Five traits, both self-assessed and attributed. The experiments were performed over 60,000 pictures tagged as favorite by 300 users (the PsychoFlickr Corpus). The results show that it is possible to predict both self-assessed and attributed traits beyond chance. In line with the state of the art in Personality Computing, the latter are predicted with higher effectiveness (correlation up to 0.68 between actual and predicted traits).
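The core regression step (mapping aggregated low-level picture features to a numerical trait score, then evaluating by the correlation between actual and predicted traits) can be sketched as follows. The feature dimensionality, synthetic data, and ordinary least squares are stand-in assumptions; the article's actual feature set and learning method are not specified in this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 300 users, 8 aggregated low-level aesthetic
# features per gallery, one Big-Five trait score per user.
X = rng.normal(size=(300, 8))
true_w = rng.normal(size=8)
y = X @ true_w + rng.normal(scale=0.5, size=300)   # noisy trait scores

# Train/test split, ordinary least squares fit, then evaluate by the
# Pearson correlation between actual and predicted traits (the
# statistic the abstract reports, up to 0.68 on real data).
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_te @ w
r = np.corrcoef(y_te, pred)[0, 1]
```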

    SHAPE FROM FOCUS USING LULU OPERATORS AND DISCRETE PULSE TRANSFORM IN THE PRESENCE OF NOISE

    The study of three-dimensional (3D) shape recovery is an interesting and challenging area of research. Recovering the depth information of an object from ordinary two-dimensional (2D) images has been studied for a long time with different techniques. One technique for 3D shape recovery is known as Shape from Focus (SFF). SFF is a method that relies on differently focused values to reconstruct the shape, surface, and depth of an object. The different focus values are captured by taking multiple images of the same object while varying the focal length or the distance between object and camera. This single-view imaging makes data gathering simpler in SFF compared to other shape recovery techniques. The shape of the object can be computed from the differently focused images by applying sharpness detection methods that detect and maximize the focus values. However, noise destroys much of the information in an image, and noise corruption can change the focus values in the images. This thesis presents a new 3D shape recovery technique based on focus values in the presence of noise. The proposed technique is based on LULU operators and the Discrete Pulse Transform (DPT). LULU operators are nonlinear rank-selector operators that satisfy consistent separation, total variation, and shape preservation properties. The proposed technique shows better and more accurate performance in comparison with existing SFF techniques in noisy environments.
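The classical SFF baseline the thesis builds on can be sketched in a few lines: compute a per-pixel sharpness measure for each frame of the focus stack, then take the frame index that maximizes it as the depth. The sum-modified-Laplacian used here is one common focus measure, chosen for illustration; the thesis replaces this sharpness step with LULU/DPT filtering for noise robustness.

```python
import numpy as np

def focus_measure(img):
    """Sum-modified-Laplacian: a common SFF sharpness measure.
    Border pixels are left at zero for simplicity."""
    lx = np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    ly = np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1])
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = lx + ly
    return out

def shape_from_focus(stack):
    """Per-pixel depth index = frame with the highest focus measure.
    `stack` is an array of shape (n_frames, H, W)."""
    measures = np.stack([focus_measure(f) for f in stack])
    return np.argmax(measures, axis=0)
```

In practice the focus measure is summed over a small window around each pixel before the argmax, which this sketch omits.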

    Blur-specific image quality assessment of microscopic hyperspectral images

    Hyperspectral (HS) imaging (HSI) expands the number of channels captured within the electromagnetic spectrum relative to regular imaging. Microscopic HSI can thus improve cancer diagnosis through automatic classification of cells. However, homogeneous focus is difficult to achieve in such images, so the aim of this work is to automatically quantify their focus for subsequent image correction. An HS image database for focus assessment was captured. Subjective scores of image focus were obtained from 24 subjects and then correlated with state-of-the-art methods. The Maximum Local Variation (MLV), Fast Image Sharpness block-based Method (FISH) and Local Phase Coherence (LPC) algorithms provided the best correlation results. With respect to execution time, LPC was the fastest.
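The evaluation protocol described here (score each image with a no-reference focus metric, then correlate against subjective opinion scores) can be sketched with a simple variance-of-Laplacian metric and a rank correlation. Both are illustrative stand-ins: the paper uses MLV, FISH, and LPC as metrics, and the abstract does not say which correlation coefficient was used.

```python
import numpy as np

def laplacian_variance(img):
    """Simple no-reference focus score: variance of the Laplacian
    response. A stand-in for MLV/FISH/LPC, which are more involved."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def spearman(a, b):
    """Spearman rank correlation (Pearson correlation of ranks),
    a usual statistic for comparing objective focus scores with
    subjective opinion scores. Assumes no tied values."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]
```

For a hyperspectral cube, one would apply the focus metric per spectral band (or to a representative band) before correlating against the subjective scores.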

    Evaluation and Understandability of Face Image Quality Assessment

    Face image quality assessment (FIQA) has been an area of interest to researchers as a way to improve face recognition accuracy. By filtering out low-quality images we can reduce various difficulties faced in unconstrained face recognition, such as failure in face or facial landmark detection, or a low presence of useful facial information. Over the last decade or so, researchers have proposed different methods to assess face image quality, spanning from the fusion of quality measures to learning-based methods. Different approaches have their own strengths and weaknesses, but it is hard to perform a comparative assessment of these methods without a database containing a wide variety of face quality and a suitable training protocol that can efficiently utilize such a large-scale dataset. In this thesis we focus on developing an evaluation platform using a large-scale face database containing wide-ranging face image quality, and we try to deconstruct the reasons behind the predicted scores of learning-based face image quality assessment methods. The contributions of this thesis are two-fold. First, (i) a carefully crafted large-scale database dedicated entirely to face image quality assessment is proposed; (ii) a learning-to-rank based large-scale training protocol is developed; and (iii) a comprehensive study of 15 face image quality assessment methods, using 12 different feature types and relative-ranking based label generation schemes, is performed. Evaluation results show various insights about the assessment methods which indicate the significance of the proposed database and the training protocol. Second, in the last few years researchers have tried various learning-based approaches to assess face image quality. Most of these methods offer either a quality bin or a score summary as a measure of the biometric quality of the face image. But, to the best of our knowledge, there has so far been no investigation of the explainable reasons behind the predicted scores. In this thesis, we propose a method to provide a clear and concise understanding of the predicted quality score of a learning-based face image quality assessment method. We believe this approach can be integrated into the FBI's understandable template and can help improve the image acquisition process by providing information on which quality factors need to be addressed.
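The learning-to-rank training protocol with relative-ranking labels can be sketched as pairwise rank learning: each label is a pair asserting that one image has higher quality than another, and a scorer is trained to respect those orderings. The perceptron-style linear scorer below is an illustrative assumption; the thesis's actual model and label-generation scheme are only named, not detailed, in the abstract.

```python
import numpy as np

def train_pairwise_ranker(feats, pairs, epochs=20, lr=0.1):
    """Pairwise learning-to-rank sketch: each pair (i, j) asserts that
    image i has higher quality than image j. A linear scorer w is
    nudged, perceptron-style, whenever a pair is ordered wrongly.
    `feats` is an (n_images, n_features) array of quality features."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            if w @ feats[i] <= w @ feats[j]:   # ranking violated
                w += lr * (feats[i] - feats[j])
    return w
```

A key practical advantage of relative-ranking labels is that annotators only compare two images at a time, which scales to large databases far better than absolute quality scoring.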

    Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric

    Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that self-adapts to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison with several other deblocking techniques. The proposed method significantly outperforms the others, both objectively and subjectively.
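The two stages described, offline least-squares training of filter coefficients and runtime selection of a coefficient set by blockiness score, can be sketched as follows. The patch layout, threshold scheme, and filter-bank ordering are illustrative assumptions; the paper's blockiness metric and training details are not given in the abstract.

```python
import numpy as np

def train_ls_filter(degraded_patches, clean_targets):
    """Offline training stage sketch: find coefficients w minimizing
    ||A w - b||^2, where each row of A is a flattened degraded patch
    and b holds the corresponding clean centre pixels."""
    w, *_ = np.linalg.lstsq(degraded_patches, clean_targets, rcond=None)
    return w

def select_filter(blockiness, banks, thresholds):
    """Runtime quality-adaptive stage sketch: pick the coefficient set
    trained for the quality class matching the no-reference blockiness
    score. `banks` is ordered from mildest to strongest filtering, and
    the last bank is the fallback for scores above every threshold."""
    for t, bank in zip(thresholds, banks):
        if blockiness <= t:
            return bank
    return banks[-1]
```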