
    Symmetry-guided nonrigid registration: the case for distortion correction in multidimensional photoemission spectroscopy

    Image symmetrization is an effective strategy to correct symmetry distortion in experimental data for which symmetry is essential in the subsequent analysis. In the process, a coordinate transform, the symmetrization transform, is required to undo the distortion. The transform may be determined by image registration (i.e. alignment) with symmetry constraints imposed in the registration target and in the iterative parameter tuning, which we call symmetry-guided registration. An example use case of image symmetrization is found in electronic band structure mapping by multidimensional photoemission spectroscopy, which employs a 3D time-of-flight detector to measure electrons sorted into the momentum (k_x, k_y) and energy (E) coordinates. In reality, imperfect instrument design, sample geometry and experimental settings distort the photoelectron trajectories and, therefore, the symmetry in the measured band structure, which hinders the full understanding and use of the volumetric datasets. We demonstrate that symmetry-guided registration can correct the symmetry distortion in momentum-resolved photoemission patterns. Using the proposed symmetry metrics, we show quantitatively that the iterative approach to symmetrization outperforms its non-iterative counterpart in the restored symmetry of the outcome while preserving the average shape of the photoemission pattern. Our approach is generalizable to distortion corrections for different types of symmetry and should also find applications in other experimental methods that produce images with similar features.
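    The iterative, symmetry-guided tuning described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simple affine distortion model, scores symmetry as the deviation of a 2D pattern from its n-fold rotational average, and uses a generic Nelder-Mead search; all function names are our own.

```python
import numpy as np
from scipy import ndimage, optimize

def symmetry_metric(img, n_fold=4):
    """Mean squared deviation of an image from its n-fold rotational
    average (zero for a perfectly n-fold-symmetric image)."""
    rotations = [ndimage.rotate(img, 360.0 * k / n_fold, reshape=False, order=1)
                 for k in range(n_fold)]
    return np.mean((img - np.mean(rotations, axis=0)) ** 2)

def affine_about_center(img, matrix):
    """Apply a 2x2 linear map about the image center, keeping the center fixed."""
    center = (np.array(img.shape) - 1) / 2.0
    return ndimage.affine_transform(img, matrix,
                                    offset=center - matrix @ center, order=1)

def symmetrize(img, n_fold=4):
    """Iteratively tune a small affine correction (the symmetrization
    transform) so that the warped image minimizes the symmetry metric."""
    def cost(p):
        a, b, c, d = p
        warped = affine_about_center(img, np.array([[1 + a, b], [c, 1 + d]]))
        return symmetry_metric(warped, n_fold)
    res = optimize.minimize(cost, np.zeros(4), method="Nelder-Mead")
    a, b, c, d = res.x
    return affine_about_center(img, np.array([[1 + a, b], [c, 1 + d]]))
```

    Applied to a sheared or anisotropically scaled momentum map, `symmetrize` returns a pattern whose rotational-symmetry deviation is reduced relative to the input.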

    Noise-robust method for image segmentation

    Segmentation of noisy images is one of the most challenging problems in image analysis, and any improvement of segmentation methods can strongly influence the performance of many image processing applications. In automated image segmentation, fuzzy c-means (FCM) clustering has been widely used because of its ability to model uncertainty within the data, its applicability to multi-modal data and its fairly robust behaviour. However, the standard FCM algorithm does not consider any information about the spatial image context and is highly sensitive to noise and other imaging artefacts. Considering the above-mentioned problems, we developed a new FCM-based approach to noise-robust fuzzy clustering, which we present in this paper. In this new iterative algorithm, we incorporate both spatial and feature-space information into the similarity measure and the membership function. We consider that the spatial information depends on the relative location and features of the neighbouring pixels. The performance of the proposed algorithm is tested on synthetic images with different noise levels and on real images. Quantitative and qualitative experimental segmentation results show that our method efficiently preserves the homogeneity of the regions and is more robust to noise than other FCM-based methods.
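    The general idea of folding neighbourhood information into the FCM membership can be sketched with a common spatial-FCM variant, in which each pixel's membership is blended with the membership mass of its 3x3 neighbourhood. This is an illustrative stand-in, not the paper's exact similarity measure; the parameters p and q weighting the two terms are assumptions.

```python
import numpy as np
from scipy import ndimage

def spatial_fcm(img, n_clusters=2, m=2.0, p=1.0, q=1.0, n_iter=30):
    """Fuzzy c-means on pixel intensities with a spatial membership term."""
    x = img.ravel().astype(float)
    # initialize cluster centres at spread-out intensity quantiles
    centres = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))
    for _ in range(n_iter):
        # standard FCM membership from distances to the centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)
        # spatial term: average membership in each pixel's 3x3 neighbourhood
        h = np.stack([ndimage.uniform_filter(ui.reshape(img.shape), size=3).ravel()
                      for ui in u])
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0)
        # update centres from the spatially weighted fuzzy memberships
        w = u ** m
        centres = (w @ x) / w.sum(axis=1)
    return centres, u.argmax(axis=0).reshape(img.shape)
```

    On a noisy two-region image, the neighbourhood term suppresses isolated mislabelled pixels that plain intensity-based FCM would leave behind.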

    Supervised cross-modal factor analysis for multiple modal data classification

    In this paper we study the problem of learning from multi-modal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities into a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both the image and text modalities of documents. We project both image and text data into a shared data space by factor analysis, and then train a class label predictor in the shared space to exploit the class label information. The factor analysis parameters and the predictor parameters are learned jointly by solving a single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projection measured by a hinge loss function. The objective function is optimized by an alternate optimization strategy in an iterative algorithm. Experiments on two different multi-modal document data sets show the advantage of the proposed algorithm over other CFA methods.
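    A toy version of such a joint objective — a cross-modal alignment distance plus the hinge loss of a linear label predictor in the shared space — might look like the following. Plain joint gradient descent stands in here for the paper's alternate optimization strategy, and every name, default and weighting (lam, lr) is an illustrative assumption.

```python
import numpy as np

def supervised_cfa(X, Y, labels, dim=2, lam=1.0, lr=0.02, n_iter=2000, seed=0):
    """Learn projections Wx, Wy into a shared space together with a linear
    hinge-loss classifier w on the projected image features."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    y = np.where(np.asarray(labels) > 0, 1.0, -1.0)  # labels as +/-1
    Wx = rng.normal(0.0, 0.1, (X.shape[1], dim))
    Wy = rng.normal(0.0, 0.1, (Y.shape[1], dim))
    w = np.zeros(dim)
    for _ in range(n_iter):
        Px, Py = X @ Wx, Y @ Wy
        # term 1: distance between the two modalities' projections
        diff = Px - Py
        gWx, gWy = X.T @ diff / n, -(Y.T @ diff) / n
        # term 2: hinge loss of the label predictor on the image projection
        active = y * (Px @ w) < 1.0          # examples inside the margin
        ya = y[active]
        gw = -lam * (Px[active] * ya[:, None]).sum(axis=0) / n
        gWx += -lam * np.outer(X[active].T @ ya, w) / n
        Wx -= lr * gWx
        Wy -= lr * gWy
        w -= lr * gw
    return Wx, Wy, w
```

    Because the alignment term ties the two projections together, a classifier trained on the image projection can also score the text projection of an unseen document via `(Y @ Wy) @ w`.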