
    Tiled top-down pyramids and segmentation of large histological images

    Recent microscopic imaging systems such as whole-slide scanners provide very large (up to 18 GB) high-resolution images. Such memory requirements raise major issues that prevent the usual image representation models from being used. Moreover, in such high-resolution images, global image features such as tissues do not appear clearly at full resolution. Such images thus contain different hierarchical information at different resolutions. This paper presents the model of tiled top-down pyramids, which provides a framework for handling such images. This model encodes a hierarchy of partitions of a large image defined at different resolutions. We also propose a generic construction scheme for such pyramids, whose validity is evaluated on a histological image application.
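The bookkeeping behind such a multi-resolution, tile-based representation can be sketched briefly. The `TiledPyramid` class below is a hypothetical illustration, not the paper's model (which also encodes the hierarchy of partitions); power-of-two scaling between levels and a fixed tile size are assumptions:

```python
class TiledPyramid:
    """Hypothetical sketch of a tiled image pyramid: each level halves the
    resolution of the one below it, and every level is split into fixed-size
    tiles so that only the tiles currently needed are held in memory."""

    def __init__(self, full_res_shape, num_levels, tile_size=256):
        self.tile_size = tile_size
        # Level 0 is the coarsest level (the top of the pyramid), matching
        # a top-down construction scheme: coarse partitions are built first
        # and refined tile by tile at the next resolution.
        self.level_shapes = [
            (full_res_shape[0] >> (num_levels - 1 - k),
             full_res_shape[1] >> (num_levels - 1 - k))
            for k in range(num_levels)
        ]

    def tile_grid(self, level):
        """Number of tiles (rows, cols) needed to cover a level."""
        h, w = self.level_shapes[level]
        ts = self.tile_size
        return ((h + ts - 1) // ts, (w + ts - 1) // ts)

pyr = TiledPyramid((16384, 16384), num_levels=4)
print(pyr.level_shapes)   # coarsest to finest: (2048, 2048) ... (16384, 16384)
print(pyr.tile_grid(3))   # (64, 64) tiles at full resolution
```

Only the tiles touched by the current step need to be resident, which is what makes multi-gigabyte images tractable.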

    Analysis and synthesis of iris images

    Of all the physiological traits of the human body that help in personal identification, the iris is probably the most robust and accurate. Although numerous iris recognition algorithms have been proposed, the underlying processes that define the texture of irises have not been extensively studied. In this thesis, multiple pair-wise pixel interactions have been used to describe the textural content of the iris image, thereby resulting in a Markov Random Field (MRF) model for the iris image. This information is expected to be useful for the development of user-specific models for iris images, i.e., the matcher could be tuned to accommodate the characteristics of each user's iris image in order to improve matching performance. We also use MRF modeling to construct synthetic irises based on iris primitives extracted from real iris images. The synthesis procedure is deterministic and avoids the sampling of a probability distribution, making it computationally simple. We demonstrate that iris textures in general are significantly different from other irregular textural patterns. Clustering experiments indicate that the synthetic irises generated using the proposed technique are similar in textural content to real iris images.
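The pairwise-interaction statistics underlying such an MRF texture description can be sketched as joint histograms of pixel pairs at fixed offsets. This is a simplification of the thesis's model; the offset list, quantization to 8 gray levels, non-negative offsets, and 8-bit input are assumptions for illustration:

```python
import numpy as np

def pairwise_histograms(img, offsets, bins=8):
    """Sketch (not the thesis's exact model): for each pixel-pair offset,
    accumulate a normalized joint histogram of quantized gray levels.
    Such pairwise co-occurrence statistics are the kind of sufficient
    statistics a multiple-pairwise-interaction MRF texture model uses.
    Assumes an 8-bit image and non-negative (dy, dx) offsets."""
    img = np.asarray(img)
    q = np.minimum(img * bins // 256, bins - 1)   # quantize to `bins` levels
    feats = {}
    for dy, dx in offsets:
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]  # reference pixels
        b = q[dy:, dx:]                            # their offset partners
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, bins], [0, bins]])
        feats[(dy, dx)] = h / h.sum()              # joint distribution
    return feats
```

Comparing these per-offset distributions across images is one way textural content can be matched or clustered.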

    Methods for Real-time Visualization and Interaction with Landforms

    This thesis presents methods to enrich data modeling and analysis in the geoscience domain with a particular focus on geomorphological applications. First, a short overview of the relevant characteristics of the used remote sensing data and basics of its processing and visualization are provided. Then, two new methods for the visualization of vector-based maps on digital elevation models (DEMs) are presented. The first method uses a texture-based approach that generates a texture from the input maps at runtime taking into account the current viewpoint. In contrast to that, the second method utilizes the stencil buffer to create a mask in image space that is then used to render the map on top of the DEM. A particular challenge in this context is posed by the view-dependent level-of-detail representation of the terrain geometry. After suitable visualization methods for vector-based maps have been investigated, two landform mapping tools for the interactive generation of such maps are presented. The user can carry out the mapping directly on the textured digital elevation model and thus benefit from the 3D visualization of the relief. Additionally, semi-automatic image segmentation techniques are applied in order to reduce the amount of user interaction required and thus make the mapping process more efficient and convenient. The challenge in the adaption of the methods lies in the transfer of the algorithms to the quadtree representation of the data and in the application of out-of-core and hierarchical methods to ensure interactive performance. Although high-resolution remote sensing data are often available today, their effective resolution at steep slopes is rather low due to the oblique acquisition angle. For this reason, remote sensing data are suitable to only a limited extent for visualization as well as landform mapping purposes. 
To provide an easy way to supply additional imagery, an algorithm for registering uncalibrated photos to a textured digital elevation model is presented. A particular challenge in registering the images is posed by large variations in the photos concerning resolution, lighting conditions, seasonal changes, etc. The registered photos can be used to increase the visual quality of the textured DEM, in particular at steep slopes. To this end, a method is presented that combines several georegistered photos into textures for the DEM. The difficulty in this compositing process is to create a consistent appearance and avoid visible seams between the photos. In addition, the photos provide valuable means to improve landform mapping. Accordingly, an extension of the landform mapping methods is presented that allows the utilization of the registered photos during mapping. This way, a detailed and exact mapping becomes feasible even at steep slopes.
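The image-space masking idea behind the second map-rendering method can be illustrated with a CPU stand-in for the stencil-buffer technique. This is a minimal sketch; the function name and the alpha-blended compositing are assumptions, not the thesis's exact rendering path:

```python
import numpy as np

def composite_map_over_terrain(terrain_rgb, map_rgb, mask, alpha=0.6):
    """Sketch of image-space masking (a CPU stand-in for the stencil-buffer
    approach): `mask` marks the screen pixels covered by the rasterized
    vector map; the map color is blended over the rendered terrain only
    where the mask is set, leaving all other pixels untouched."""
    out = terrain_rgb.astype(np.float64).copy()
    m = np.asarray(mask, dtype=bool)
    out[m] = (1.0 - alpha) * out[m] + alpha * map_rgb[m]
    return out.astype(np.uint8)
```

On the GPU, the mask lives in the stencil buffer and the blend happens in a second render pass, so the view-dependent terrain geometry never needs to be clipped against the map polygons.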

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
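One architecture-aware technique of the kind alluded to here is blocked (tiled) traversal, which keeps each block cache-resident while it is processed instead of streaming the whole image through the cache on every pass. A minimal sketch; the block size and the summation workload are illustrative assumptions:

```python
import numpy as np

def blocked_sum(img, block=64):
    """Illustrative sketch of cache-aware algorithm engineering: visit a
    large image in fixed-size blocks so each block stays in cache while it
    is processed. The result is identical to a whole-image pass; only the
    memory-access pattern changes."""
    h, w = img.shape
    total = 0.0
    for by in range(0, h, block):
        for bx in range(0, w, block):
            total += img[by:by + block, bx:bx + block].sum()
    return total
```

For a single summation the gain is small, but when several passes are fused over the same block, the data is read from main memory once instead of once per pass.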

    Regular Hierarchical Surface Models: A conceptual model of scale variation in a GIS and its application to hydrological geomorphometry

    Environmental and geographical process models inevitably involve parameters that vary spatially. One example is hydrological modelling, where parameters derived from the shape of the ground such as flow direction and flow accumulation are used to describe the spatial complexity of drainage networks. One way of handling such parameters is by using a Digital Elevation Model (DEM); such modelling is the basis of the science of geomorphometry. A frequently ignored but inescapable challenge when modellers work with DEMs is the effect of scale and geometry on the model outputs. Many parameters vary with scale as much as they vary with position. Modelling variability with scale is necessary to simplify and generalise surfaces, and desirable to accurately reconcile model components that are measured at different scales. This thesis develops a surface model that is optimised to represent scale in environmental models. A Regular Hierarchical Surface Model (RHSM) is developed that employs a regular tessellation of space and scale that forms a self-similar regular hierarchy, and incorporates Level Of Detail (LOD) ideas from computer graphics. Following convention from systems science, the proposed model is described in its conceptual, mathematical, and computational forms. The RHSM development was informed by a categorisation of Geographical Information Science (GISc) surfaces within a cohesive framework of geometry, structure, interpolation, and data model. The positioning of the RHSM within this broader framework made it easier to adapt algorithms designed for other surface models to conform to the new model. The RHSM has an implicit data model that utilises a variation of Middleton and Sivaswamy (2001)’s intrinsically hierarchical Hexagonal Image Processing referencing system, which is here generalised for rectangular and triangular geometries. The RHSM provides a simple framework to form a pyramid of coarser values in a process characterised as a scaling function.
In addition, variable-density realisations of the hierarchical representation can be generated by defining an error value and a decision rule to select the coarsest appropriate scale for a given region to satisfy the modeller’s intentions. The RHSM is assessed using adaptations of the geomorphometric algorithms flow direction and flow accumulation. The effects of scale and geometry on the anisotropy and accuracy of model results are analysed on dispersive and concentrative cones, and on Light Detection And Ranging (LiDAR) derived surfaces of the urban area of Dunedin, New Zealand. The RHSM modelling process revealed aspects of the algorithms not obvious within a single geometry, such as the influence of node geometry on flow direction results, and a conceptual weakness of flow accumulation algorithms on dispersive surfaces that causes asymmetrical results. In addition, comparison of algorithm behaviour between geometries undermined the hypothesis that variance of cell cross section with direction is important for conversion of cell accumulations to point values. The ability to analyse algorithms for scale and geometry and adapt algorithms within a cohesive conceptual framework offers deeper insight into algorithm behaviour than previously achieved. The deconstruction of algorithms into geometry-neutral forms and the application of scaling functions are important contributions to the understanding of spatial parameters within GISc.
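For the square-grid geometry, the flow-direction algorithm being adapted is the classic D8 rule. A minimal sketch follows; the boundary handling, the pit marker `-1`, and the neighbour ordering are simplifying assumptions, and the hexagonal and triangular variants discussed in the thesis are not shown:

```python
import numpy as np

def d8_flow_direction(dem):
    """Sketch of the classic D8 flow-direction rule on a square grid: each
    interior cell drains to whichever of its eight neighbours gives the
    steepest descent. Diagonal drops are divided by sqrt(2) to account for
    the longer centre-to-centre distance. Cells with no downhill neighbour
    (pits or flats) are marked -1."""
    h, w = dem.shape
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    direction = np.full((h, w), -1, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best_drop, best_k = 0.0, -1
            for k, (dy, dx) in enumerate(nbrs):
                dist = np.hypot(dy, dx)  # 1 for edges, sqrt(2) for diagonals
                drop = (dem[y, x] - dem[y + dy, x + dx]) / dist
                if drop > best_drop:
                    best_drop, best_k = drop, k
            direction[y, x] = best_k
    return direction
```

It is exactly this distance weighting, and the neighbour geometry it encodes, that changes when the rule is transferred to hexagonal or triangular tessellations.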

    Object Counting with Deep Learning

    This thesis explores various empirical aspects of deep learning, or convolutional network based, models for efficient object counting. First, we train moderately large convolutional networks from scratch on comparatively small datasets containing a few hundred samples, with conventional image-processing-based data augmentation. Then, we extend this approach to unconstrained, outdoor images using more advanced architectural concepts. Additionally, we propose an efficient, randomized data augmentation strategy based on sub-regional pixel distribution for low-resolution images. Next, the effectiveness of depth-to-space shuffling of feature elements for efficient segmentation is investigated for simpler problems like binary segmentation -- often required in the counting framework. This depth-to-space operation violates the basic assumption of encoder-decoder segmentation architectures. Consequently, it helps to train the encoder model as a sparsely connected graph. Nonetheless, we have found accuracy comparable to that of the standard encoder-decoder architectures with our depth-to-space models. After that, the subtleties regarding the lack of localization information in the conventional scalar count loss for one-look models are illustrated. At this point, without using additional annotations, a possible solution is proposed based on the regulation of a network-generated heatmap in the form of a weak, subsidiary loss. The models trained with this auxiliary loss alongside the conventional loss perform much better compared to their baseline counterparts, both qualitatively and quantitatively. Lastly, the intricacies of tiled prediction for high-resolution images are studied in detail, and a simple and effective trick of eliminating the normalization factor in an existing computational block is demonstrated.
All of the approaches employed here are thoroughly benchmarked for object counting across multiple heterogeneous datasets against previous state-of-the-art approaches.
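The depth-to-space rearrangement investigated above can be sketched in a few lines. This is a NumPy stand-in for the operation; the channel-major layout `(C*r*r, H, W)` is an assumption matching the common pixel-shuffle convention, not necessarily the thesis's exact tensor layout:

```python
import numpy as np

def depth_to_space(x, r):
    """Sketch of the depth-to-space (pixel-shuffle) operation: a feature
    map of shape (C*r*r, H, W) is rearranged into (C, H*r, W*r), trading
    channel depth for spatial resolution. Each group of r*r channels fills
    one r-by-r output neighbourhood, so upsampling needs no learned
    decoder layers."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)
```

Because the spatial upsampling is a fixed permutation rather than a learned decoder, the trainable part of the network reduces to the encoder, which is the "sparsely connected graph" view mentioned above.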

    A Method for detection and quantification of building damage using post-disaster LiDAR data

    There is a growing need for rapid and accurate damage assessment following natural disasters, terrorist attacks, and other crisis situations. The use of light detection and ranging (LiDAR) data to detect and quantify building damage following a natural disaster was investigated in this research. Using LiDAR data collected by the Rochester Institute of Technology (RIT) just days after the January 12, 2010 Haiti earthquake, a set of processes was developed for extracting buildings in urban environments and assessing structural damage. Building points were separated from the rest of the point cloud using a combination of point classification techniques involving height, intensity, and multiple-return information, as well as thresholding and morphological filtering operations. Damage was detected by measuring the deviation between building roof points and dominant planes found using a normal vector and height variance approach. The devised algorithms were incorporated into a MATLAB graphical user interface (GUI), which guided the workflow and allowed for user interaction. The semi-autonomous tool ingests a discrete-return LiDAR point cloud of a post-disaster scene, and outputs a building damage map highlighting damaged and collapsed buildings. The entire approach was demonstrated on a set of six validation sites, carefully selected from the Haiti LiDAR data. A combined 85.6% of the truth buildings in all of the sites were detected, with a standard deviation of 15.3%. Damage classification results were evaluated against the Global Earth Observation - Catastrophe Assessment Network (GEO-CAN) and Earthquake Engineering Field Investigation Team (EEFIT) truth assessments. The combined overall classification accuracy for all six sites was 68.3%, with a standard deviation of 9.6%. Results were impacted by imperfect validation data, inclusion of non-building points, and very diverse environments, e.g., varying building types, sizes, and densities.
Nevertheless, the processes exhibited significant potential for detecting buildings and assessing building-level damage.
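The deviation-from-plane measurement at the heart of the damage detection can be sketched with an ordinary least-squares plane fit. This is a simplification: the research finds dominant planes via a normal-vector and height-variance approach, and robustness to outliers is omitted here:

```python
import numpy as np

def plane_deviation(points):
    """Sketch of the deviation-from-dominant-plane idea: fit a single
    least-squares plane z = a*x + b*y + c to the roof points of one
    building (an N-by-3 array) and return each point's vertical residual.
    Large residuals flag a roof that no longer lies on a plane, i.e.,
    likely damage or collapse."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, residuals
```

Thresholding a statistic of the residuals (e.g., their standard deviation) per building is one way such per-roof deviations could be turned into the damaged/collapsed labels of a damage map.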