2,284 research outputs found

    Statistical/Geometric Techniques for Object Representation and Recognition

    Object modeling and recognition are key areas of research in computer vision and graphics with a wide range of applications. Though research in these areas is not new, traditionally most of it has focused on analyzing problems under controlled environments. The challenges posed by real-life applications demand more general and robust solutions. The wide variety of objects with large intra-class variability makes the task very challenging. The difficulty of modeling and matching objects also varies depending on the input modality. In addition, the easy availability of sensors and storage has resulted in a tremendous increase in the amount of data that needs to be processed, which requires efficient algorithms suitable for large databases. In this dissertation, we address some of the challenges involved in modeling and matching objects in realistic scenarios. Object matching in images requires accounting for large variability in appearance due to changes in illumination and viewpoint. Any real-world object is characterized by its underlying shape and albedo, which, unlike the image intensity, are insensitive to changes in illumination conditions. We propose a stochastic filtering framework for estimating object albedo from a single intensity image by formulating albedo estimation as an image estimation problem. We also show how this albedo estimate can be used for illumination-insensitive object matching and for more accurate shape recovery from a single image using the standard shape-from-shading formulation. We start with the simpler problem where the pose of the object is known and only the illumination varies. We then extend the proposed approach to handle unknown pose in addition to illumination variations. We also use the estimated albedo maps for another important application: recognizing faces across age progression. Many approaches that address the problem of modeling and recognizing objects from images assume that the underlying objects have diffuse texture, but most real-world objects exhibit a combination of diffuse and specular properties. We propose an approach for separating the diffuse and specular reflectance components of a given color image so that algorithms designed for objects with diffuse texture become applicable to a much wider range of real-world objects. Representing and matching the 2D and 3D geometry of objects is also an integral part of object matching, with applications in gesture recognition, activity classification, trademark and logo recognition, etc. The challenge in matching 2D/3D shapes lies in accounting for different rigid and non-rigid deformations, large intra-class variability, noise and outliers. In addition, since shapes are usually represented as a collection of landmark points, the shape matching algorithm also has to deal with missing or unknown correspondence across these data points. We propose an efficient shape indexing approach in which the different feature vectors representing a shape are mapped to a hash table. For a query shape, we show how similar shapes in the database can be retrieved efficiently without the need to establish correspondence, making the algorithm extremely fast and scalable. We also propose an approach for matching and registration of 3D point cloud data across unknown or missing correspondence using an implicit surface representation. Finally, we discuss possible future directions of this research.
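    As a concrete reference point for the albedo discussion above, the sketch below writes out the Lambertian image-formation model I = albedo * max(n . s, 0) and a naive per-pixel inversion of it. It is only an illustration of the underlying model, not the dissertation's stochastic filtering framework; every function and variable name here (render_lambertian, naive_albedo_estimate, the toy normals and light direction) is an assumption made for this example.

    # Illustrative sketch only: the Lambertian model that underlies albedo estimation,
    # with a naive per-pixel inversion. Not the proposed stochastic filtering method.
    import numpy as np

    def render_lambertian(albedo, normals, light_dir):
        """Render an intensity image I = albedo * max(n . s, 0)."""
        s = np.asarray(light_dir, dtype=float)
        s = s / np.linalg.norm(s)                  # unit light direction
        shading = np.clip(normals @ s, 0.0, None)  # n . s at each pixel, shape (H, W)
        return albedo * shading

    def naive_albedo_estimate(image, normals, light_dir, eps=1e-6):
        """Invert the Lambertian model pixel-wise (ill-posed where shading is near zero)."""
        s = np.asarray(light_dir, dtype=float)
        s = s / np.linalg.norm(s)
        shading = np.clip(normals @ s, eps, None)
        return image / shading

    # Toy example: a flat frontal surface lit from an oblique direction.
    H, W = 4, 4
    normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0   # all normals face the camera
    albedo = np.full((H, W), 0.7)
    image = render_lambertian(albedo, normals, [0.3, 0.2, 0.9])
    print(np.allclose(naive_albedo_estimate(image, normals, [0.3, 0.2, 0.9]), albedo))  # True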

    Modelling the human perception of shape-from-shading

    Shading conveys information about 3-D shape, and the process of recovering this information is called shape-from-shading (SFS). This thesis divides the process of human SFS into two functional sub-units (luminance disambiguation and shape computation) and studies them individually. Based on the results of a series of psychophysical experiments, it is proposed that the interaction between first- and second-order channels plays an important role in disambiguating luminance. Building on this idea, two versions of a biologically plausible model are developed to explain the human performance observed here and elsewhere. An algorithm sharing the same idea is also developed as a solution to the problem of intrinsic image decomposition in the field of image processing. With regard to the shape computation unit, a link between luminance variations and estimated surface normals is identified by testing participants on simple gratings with several different luminance profiles. This methodology is unconventional but can be justified in the light of past studies of human SFS. Finally, a computational algorithm for SFS containing two distinct operating modes is proposed. This algorithm is broadly consistent with the known psychophysics of human SFS.
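    To make the intrinsic image decomposition mentioned above concrete, here is a minimal sketch of the classic Retinex-style heuristic on a 1-D luminance profile: large log-luminance gradients are attributed to reflectance edges, and small ones to smooth shading. This is not the channel-interaction algorithm proposed in the thesis; the function name, threshold value, and toy profile are assumptions for illustration.

    # Illustrative sketch only: classic Retinex-style intrinsic decomposition in 1-D,
    # not the biologically motivated model developed in the thesis.
    import numpy as np

    def retinex_decompose_1d(luminance, threshold=0.1):
        """Split log-luminance gradients: large steps -> reflectance, small -> shading."""
        log_l = np.log(luminance)
        grad = np.diff(log_l)
        refl_grad = np.where(np.abs(grad) > threshold, grad, 0.0)   # sharp albedo edges
        shad_grad = grad - refl_grad                                # smooth remainder
        log_r = np.concatenate([[0.0], np.cumsum(refl_grad)])
        log_s = log_l - log_r                                       # so reflectance * shading = luminance
        return np.exp(log_r), np.exp(log_s)

    # Toy profile: a slow shading ramp multiplied by an abrupt reflectance step.
    x = np.linspace(0.0, 1.0, 200)
    shading = 0.5 + 0.4 * x                       # smooth illumination gradient
    reflectance = np.where(x < 0.5, 0.3, 0.8)     # abrupt albedo edge
    r_est, s_est = retinex_decompose_1d(shading * reflectance)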

    Bulge plus disc and Sérsic decomposition catalogues for 16,908 galaxies in the SDSS Stripe 82 co-adds: A detailed study of the ugriz structural measurements

    Quantitative characterization of galaxy morphology is vital in enabling comparison of observations to predictions from galaxy formation theory. However, without significant overlap between the observational footprints of deep and shallow galaxy surveys, the extent to which structural measurements for large galaxy samples are robust to image quality (e.g., depth, spatial resolution) cannot be established. Deep images from the Sloan Digital Sky Survey (SDSS) Stripe 82 co-adds provide a unique solution to this problem, offering a 1.6-1.8 magnitude improvement in depth with respect to SDSS Legacy images. Having similar spatial resolution to Legacy, the co-adds make it possible to examine the sensitivity of parametric morphologies to depth alone. Using the Gim2D surface-brightness decomposition software, we provide public morphology catalogues for 16,908 galaxies in the Stripe 82 ugriz co-adds. Our methods and selection are completely consistent with the Simard et al. (2011) and Mendel et al. (2014) photometric decompositions. We rigorously compare measurements in the deep and shallow images. We find no systematics in total magnitudes and sizes except for faint galaxies in the u-band and the brightest galaxies in each band. However, characterization of bulge-to-total fractions is significantly improved in the deep images. Furthermore, statistics used to determine whether single-Sérsic or two-component (e.g., bulge+disc) models are required become more bimodal in the deep images. Lastly, we show that asymmetries are enhanced in the deep images and that the enhancement is positively correlated with the asymmetries measured in Legacy images.
    Comment: 27 pages, 14 figures. MNRAS accepted. Our catalogues are available in TXT and SQL formats at http://orca.phys.uvic.ca/~cbottrel/share/Stripe82/Catalogs
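    As a rough illustration of the parametric models behind these decompositions, the sketch below evaluates a single-Sérsic profile and a de Vaucouleurs bulge plus exponential disc along the radius. It is not the Gim2D fitting code; the parameter values and the simple b_n ≈ 2n - 1/3 approximation are assumptions made for this example.

    # Illustrative sketch only: Sersic and bulge+disc surface-brightness profiles,
    # not the Gim2D decomposition pipeline used for the catalogues.
    import numpy as np

    def sersic(r, i_e, r_e, n):
        """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)^(1/n) - 1))."""
        b_n = 2.0 * n - 1.0 / 3.0      # common approximation, adequate for n >~ 0.5
        return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

    def bulge_plus_disc(r, i_e_bulge, r_e_bulge, i_0_disc, r_d_disc):
        """de Vaucouleurs bulge (Sersic n=4) plus a classic exponential disc (Sersic n=1)."""
        bulge = sersic(r, i_e_bulge, r_e_bulge, n=4.0)
        disc = i_0_disc * np.exp(-r / r_d_disc)
        return bulge + disc

    # Toy evaluation along the radius (arbitrary units and amplitudes).
    r = np.linspace(0.1, 20.0, 100)
    profile = bulge_plus_disc(r, i_e_bulge=1.0, r_e_bulge=1.5, i_0_disc=0.4, r_d_disc=4.0)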