1,032 research outputs found

    Propagation Kernels

    Full text link
    We introduce propagation kernels, a general graph-kernel framework for efficiently measuring the similarity of structured data. Propagation kernels are based on monitoring how information spreads through a set of given graphs. They leverage early-stage distributions from propagation schemes such as random walks to capture structural information encoded in node labels, attributes, and edge information. This has two benefits. First, off-the-shelf propagation schemes can be used to naturally construct kernels for many graph types, including labeled, partially labeled, unlabeled, directed, and attributed graphs. Second, by leveraging existing efficient and informative propagation schemes, propagation kernels can be considerably faster than state-of-the-art approaches without sacrificing predictive performance. We also show that if the graphs at hand have a regular structure, for instance when modeling image or video data, one can exploit this regularity to scale the kernel computation to large databases of graphs with thousands of nodes. We support our contributions by exhaustive experiments on a number of real-world graphs from a variety of application domains.
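
    A minimal sketch of the propagation-kernel idea is given below: two labeled graphs are compared by repeatedly propagating node-label distributions with a random-walk transition matrix and taking the dot product of binned distribution counts after each step. Dense adjacency matrices, a small shared label alphabet, and plain histogram binning (instead of the locality-sensitive hashing used in the paper) are assumptions for illustration only.

    import numpy as np

    def propagation_kernel(A1, L1, A2, L2, t_max=3, bins=10):
        """Compare two graphs given adjacency matrices A1, A2 and one-hot
        node-label matrices L1, L2 (n_nodes x n_labels, same label set)."""
        def transition(A):
            d = A.sum(axis=1, keepdims=True)
            d[d == 0] = 1.0
            return A / d                              # row-stochastic random-walk matrix

        T1, T2 = transition(A1), transition(A2)
        P1, P2 = L1.astype(float), L2.astype(float)
        k = 0.0
        for _ in range(t_max + 1):
            # bin each node's current label distribution and compare bin counts
            h1, _ = np.histogramdd(P1, bins=bins, range=[(0, 1)] * P1.shape[1])
            h2, _ = np.histogramdd(P2, bins=bins, range=[(0, 1)] * P2.shape[1])
            k += float((h1 * h2).sum())               # contribution of this iteration
            P1, P2 = T1 @ P1, T2 @ P2                 # one propagation step
        return k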

    Forward model for quantitative pulse-echo speed-of-sound imaging

    Get PDF
    Computed ultrasound tomography in echo mode (CUTE) allows determining the spatial distribution of speed-of-sound (SoS) inside tissue using handheld pulse-echo ultrasound (US). This technique is based on measuring the changing phase of beamformed echoes obtained under varying transmit (Tx) and/or receive (Rx) steering angles. The SoS is reconstructed by inverting a forward model describing how the spatial distribution of SoS is related to the spatial distribution of the echo phase shift. CUTE holds promise as a novel diagnostic modality that complements conventional US in a single, real-time handheld system. Here we demonstrate that, in order to obtain robust quantitative results, the forward model must contain two features that have so far not been taken into account: a) the phase shift must be detected between pairs of Tx and Rx angles that are centred around a set of common mid-angles, and b) it must account for an additional phase shift induced by the error of the reconstructed position of echoes. In a phantom study mimicking liver imaging, this new model leads to a substantially improved quantitative SoS reconstruction compared to the model that has been used so far. The importance of the new model as a prerequisite for an accurate diagnosis is corroborated in preliminary volunteer results.
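
    The inversion described above can be sketched as a regularised linear least-squares problem, assuming the forward model has already been discretised into a matrix M that maps a slowness (1/SoS) deviation map to the detected echo phase shifts. The matrix assembly, the common-mid-angle pairing, and the echo-position error term are not reproduced here; the names and the simple Tikhonov regulariser are illustrative, not the authors' implementation.

    import numpy as np

    def reconstruct_slowness(M, phi, lam=1e-2):
        """Solve  min_s ||M s - phi||^2 + lam ||D s||^2  for the slowness map s."""
        n = M.shape[1]
        D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)     # crude first-difference regulariser
        s = np.linalg.solve(M.T @ M + lam * (D.T @ D), M.T @ phi)
        return s                                          # slowness deviation per image pixel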

    Digital fragile watermarking scheme for authentication of JPEG images

    Full text link

    Predicting blur visual discomfort for natural scenes by the loss of positional information

    Get PDF
    The perception of blur due to accommodation failures, insufficient optical correction, or imperfect image reproduction is a common source of visual discomfort, usually attributed to an anomalous and annoying distribution of the image spectrum in the spatial frequency domain. In the present paper, this discomfort is related to a loss of the localization accuracy of the observed patterns. It is assumed, as a starting perceptual principle, that the visual system is optimally adapted to pattern localization in a natural environment. Thus, since the best possible accuracy of image-pattern localization is indicated by the positional Fisher information, it is argued that blur discomfort is strictly related to a loss of this information. Following this concept, a receptive-field functional model is adopted to predict the visual discomfort. It is a complex-valued operator, orientation-selective both in the space domain and in the spatial frequency domain. Starting from the case of Gaussian blur, the analysis is extended to a generic type of blur by applying a positional Fisher information equivalence criterion. Out-of-focus blur and astigmatic blur are presented as significant examples. The validity of the proposed model is verified by comparing its predictions with subjective ratings. The model fits linearly with the experiments reported in independent databases, based on different protocols and settings.
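
    The loss of positional information can be illustrated numerically under the standard assumption that, for a 1-D luminance profile observed in additive white Gaussian noise, the positional Fisher information is proportional to the integrated squared spatial derivative of the profile. The sketch below compares this quantity before and after Gaussian blur; the receptive-field operator and the mapping from information loss to a discomfort rating used in the paper are not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def positional_fisher_information(signal, dx=1.0, sigma_noise=1.0):
        ds = np.gradient(signal, dx)                      # spatial derivative s'(x)
        return np.sum(ds ** 2) * dx / sigma_noise ** 2    # J = (1/sigma_n^2) * integral of s'(x)^2 dx

    def fisher_information_loss(signal, blur_sigma, dx=1.0):
        """Relative loss of positional Fisher information caused by Gaussian blur."""
        j_sharp = positional_fisher_information(signal, dx)
        j_blurred = positional_fisher_information(gaussian_filter1d(signal, blur_sigma), dx)
        return 1.0 - j_blurred / j_sharp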

    Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks

    Get PDF
    Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects from the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNN) with global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using geospatial overhead imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE), is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested with the handcrafted feature-based methods SURF and SIFT and a non-handcrafted feature-based pre-trained fine-tuned VGG-16 DLCNN on coarse-grained image registration. Both handcrafted and non-handcrafted feature-based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for the coarse-grained registration process with DLCNNs, and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML), is covered in detail. A multi-resolution grid-based approach is used, where each grid cell ID is treated as the supervised training label for that respective resolution. Pre-trained fine-tuned VGG-16 DLCNNs, two custom architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages compared to the pre-trained fine-tuned VGG-16 DLCNN on JNE RCI. This shows that the process for creating JNE ML is valid and that the dataset is suitable for applying machine learning to the coarse-grained registration problem. All custom architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages compared to the fine-tuned pre-trained VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN were able to generalize well to new imagery that these networks had not previously been trained on. Through the contributions of this research, a foundation is laid for future work to be conducted on the UAS global localization problem within the rural forested JNE AOI.
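
    The supervised formulation described above (a grid-cell ID at one spatial resolution used as the class label for each sub-image) can be sketched as a standard fine-tuning setup. Only the VGG-16 variant is shown, and the layer freezing, cell count, and optimiser settings below are assumptions for illustration, not the configuration used in the thesis.

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_grid_cell_classifier(num_grid_cells, freeze_features=True):
        """Fine-tune a pre-trained VGG-16 to predict which grid cell a sub-image falls in."""
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        if freeze_features:
            for p in net.features.parameters():
                p.requires_grad = False                       # keep the convolutional backbone fixed
        net.classifier[6] = nn.Linear(4096, num_grid_cells)   # replace the 1000-way ImageNet head
        return net

    model = build_grid_cell_classifier(num_grid_cells=64)     # hypothetical 8 x 8 grid at one resolution
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)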

    Connectivity-preserving transformations of binary images

    Get PDF
    A binary image I is B_a,W_b-connected, where a,b ∈ {4,8}, if its foreground is a-connected and its background is b-connected. We consider a local modification of a B_a,W_b-connected image I in which a black pixel can be interchanged with an adjacent white pixel provided that this preserves the connectivity of both the foreground and the background of I. We have shown that for any (a,b) ∈ {(4,8),(8,4),(8,8)}, any two B_a,W_b-connected images I and J, each with n black pixels, differ by a sequence of Θ(n^2) interchanges. We have also shown that any two B_4,W_4-connected images I and J, each with n black pixels, differ by a sequence of O(n^4) interchanges.
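
    The interchange operation can be illustrated with a simple component-counting check: a black pixel may be swapped with an adjacent white pixel only if the number of a-connected foreground components and b-connected background components stays the same. The sketch below uses scipy connected-component labelling on small images; it is not the local characterisation analysed in the paper.

    import numpy as np
    from scipy.ndimage import label, generate_binary_structure

    def num_components(img, connectivity):
        # connectivity 4 -> cross-shaped structuring element, 8 -> full 3x3 block
        s = generate_binary_structure(2, 1 if connectivity == 4 else 2)
        return label(img, structure=s)[1]

    def interchange_preserves_connectivity(img, black, white, a=8, b=4):
        """True if swapping black pixel `black` with adjacent white pixel `white`
        keeps the foreground a-connected and the background b-connected."""
        assert img[black] == 1 and img[white] == 0
        swapped = img.copy()
        swapped[black], swapped[white] = 0, 1
        return (num_components(swapped, a) == num_components(img, a) and
                num_components(1 - swapped, b) == num_components(1 - img, b))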

    Imaging Soft Materials with Scanning Tunneling Microscopy

    Get PDF
    By modifying freeze-fracture replication, a standard electron microscopy fixation technique, for use with the scanning tunneling microscope (STM), a variety of soft, non-conductive biomaterials can be imaged at high resolution in three dimensions. Metal replicas make near-ideal samples for STM in comparison to the original biological materials. Modifications include a 0.1 μm backing layer of silver and mounting the replicas on fine-mesh silver filters to enhance the rigidity of the metal replica. This is required unless STM imaging is carried out in vacuum; otherwise, a liquid film of contamination physically connects the STM tip with the sample. This mechanical coupling leads to exaggerated height measurements; the enhanced rigidity of the thicker replica eliminates much of the height amplification. Further improvement was obtained by imaging in a dry nitrogen atmosphere. Calibration and reproducibility were tested with replicas of well-characterized bilayers of cadmium arachidate on mica that provide regular 5.5 nm steps. We have used the STM/replica technique to examine the ripple shape and amplitude in the Pβ′ phase of dimyristoylphosphatidylcholine (DMPC) in water. STM images were analyzed using a cross-correlation averaging program to eliminate the effects of noise and the finite size and shapes of the metal grains that make up the replica. The correlation averaging allowed us to develop a composite ripple profile averaged over hundreds of individual ripples and different samples. The STM/replica technique is sufficiently general that it can be used to examine a variety of hydrated lipid and protein samples at a lateral resolution of about 1 nm and a vertical resolution of about 0.3 nm.
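
    The cross-correlation averaging step can be sketched for 1-D height profiles as below, assuming each ripple has already been extracted as a profile of equal length: profiles are shifted to their best cross-correlation alignment with a reference and then averaged to suppress the grain noise of the replica. The actual 2-D correlation-averaging program used in the study is not reproduced.

    import numpy as np

    def correlation_average(profiles):
        """Align each profile to the first one by cross-correlation, then average."""
        ref = profiles[0].astype(float)
        aligned = [ref]
        for p in profiles[1:]:
            xc = np.correlate(p - p.mean(), ref - ref.mean(), mode="full")
            shift = xc.argmax() - (len(ref) - 1)   # lag of the best alignment
            aligned.append(np.roll(p, -shift).astype(float))
        return np.mean(aligned, axis=0)            # composite ripple profile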

    Synthetic Aperture Radar Tool and Libraries: A Framework for Geo-Referenced Data Processing and Algorithm Prototyping

    Get PDF
    Creating a system for Synthetic Aperture Radar (SAR) image formation can be a huge undertaking, as it requires knowledge of several disparate domains. Researchers may be prevented from applying interesting techniques in a particular domain due to hurdles in working with areas outside their field of interest. This paper presents the SyntheTic Aperture Radar Tool and Libraries (STARTAL) framework for SAR processing, which simplifies adding new data formats and prototyping algorithms. STARTAL provides a user interface for viewing the full data region on ground geometry, selecting sub-regions to process, and viewing processed results. Many common, difficult tasks are provided as libraries for general use. To validate the STARTAL framework, this paper also shows imagery processed with algorithms developed at Utah State University (USU) that are derived from a model-based expression of the relationship between collected SAR data and ground geometry.
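
    As an illustration of the kind of geo-referenced processing such a framework has to support, the sketch below backprojects range-compressed pulses onto a user-selected ground grid. It is a generic time-domain backprojection under assumed array names and units, not code from STARTAL or the USU algorithms mentioned above.

    import numpy as np

    def backproject(pulses, platform_pos, range_axis, grid_xyz, wavelength):
        """pulses: (n_pulses, n_range) complex range-compressed data,
        platform_pos: (n_pulses, 3) antenna positions, grid_xyz: (n_pix, 3)
        ground-grid points; returns a complex image sampled on the grid."""
        image = np.zeros(grid_xyz.shape[0], dtype=complex)
        for pulse, pos in zip(pulses, platform_pos):
            r = np.linalg.norm(grid_xyz - pos, axis=1)              # pixel-to-antenna range
            sample = (np.interp(r, range_axis, pulse.real)
                      + 1j * np.interp(r, range_axis, pulse.imag))
            image += sample * np.exp(4j * np.pi * r / wavelength)   # re-phase and accumulate
        return image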