    Solar Magnetic Tracking. I. Software Comparison and Recommended Practices

    Feature tracking and recognition are increasingly common tools for data analysis, but they are typically implemented on an ad hoc basis by individual research groups, which limits the usefulness of derived results when selection effects and algorithmic differences are not controlled. Specific results that are affected include the solar magnetic turnover time, the distributions of sizes, strengths, and lifetimes of magnetic features, and the physics of both small-scale flux emergence and the small-scale dynamo. In this paper, we present the results of a detailed comparison between four tracking codes applied to a single set of data from SOHO/MDI, describe the interplay between desired tracking behavior and the parameterization of tracking algorithms, and make recommendations for feature selection and tracking practice in future work.
    Comment: In press for Astrophys. J. 200
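
    The kind of feature identification and frame-to-frame association being compared can be illustrated with a minimal sketch. The threshold, the use of scipy.ndimage, and the overlap-based matching rule below are illustrative assumptions, not the behavior of any of the four codes studied in the paper.

        import numpy as np
        from scipy import ndimage

        def identify_features(magnetogram, threshold=50.0):
            """Label contiguous pixels whose |B_los| exceeds a cutoff (e.g. in Gauss)."""
            mask = np.abs(magnetogram) > threshold
            labels, n_features = ndimage.label(mask)
            return labels, n_features

        def associate_frames(labels_prev, labels_curr):
            """Match features between consecutive frames by pixel overlap."""
            matches = {}
            for feat in range(1, labels_prev.max() + 1):
                overlap = labels_curr[labels_prev == feat]
                overlap = overlap[overlap > 0]
                if overlap.size:
                    # Assign the current-frame label sharing the most pixels.
                    matches[feat] = int(np.bincount(overlap).argmax())
            return matches

    Choices such as the detection threshold and the association rule are exactly the parameterization issues that, as the abstract argues, must be controlled when comparing derived quantities such as feature sizes and lifetimes.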

    Dynamic Zoom Simulations: a fast, adaptive algorithm for simulating lightcones

    The advent of a new generation of large-scale galaxy surveys is pushing cosmological numerical simulations into uncharted territory. The simultaneous requirements of high resolution and very large volume pose serious technical challenges due to their computational and data-storage demands. In this paper, we present a novel approach, dubbed Dynamic Zoom Simulations (DZS), developed to tackle these issues. Our method is tailored to the production of lightcone outputs from N-body numerical simulations, which allow for more efficient storage and post-processing than standard comoving snapshots and more directly mimic the format of survey data. In DZS, the resolution of the simulation is dynamically decreased outside the lightcone surface, reducing the computational workload while preserving the accuracy inside the lightcone and the large-scale gravitational field. We show that our approach can achieve virtually identical results to traditional simulations at half the computational cost for our largest box, and we forecast this speedup to increase to a factor of up to 5 for larger and/or higher-resolution simulations. We assess the accuracy of the numerical integration by comparing pairs of identical simulations run with and without DZS: deviations in the lightcone halo mass function, in the sky-projected lightcone, and in the 3D matter lightcone always remain below 0.1%. In summary, our results indicate that the DZS technique may provide a highly valuable tool to address the technical challenges that will characterise the next generation of large-scale cosmological simulations.
    Comment: 17 pages, 13 figures, version accepted for publication in MNRA
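
    The core bookkeeping behind the approach can be sketched as follows: once a region lies outside the observer's (shrinking) lightcone surface it can never re-enter it, so its resolution may be degraded. The function names, the use of astropy's Planck18 cosmology, and the safety buffer below are illustrative assumptions, not the actual DZS implementation.

        import numpy as np
        from astropy.cosmology import Planck18

        def lightcone_radius(a):
            """Comoving radius (Mpc) of the lightcone surface at scale factor a,
            for an observer at a = 1."""
            z = 1.0 / a - 1.0
            return Planck18.comoving_distance(z).value

        def degradable(positions, observer, a, buffer_mpc=10.0):
            """Flag particles whose comoving distance from the observer already
            exceeds the lightcone radius (plus a safety buffer in Mpc)."""
            r = np.linalg.norm(positions - observer, axis=1)
            return r > lightcone_radius(a) + buffer_mpc

    Particles flagged in this way would then be merged into lower-resolution tracers, which is where the computational and memory savings quoted in the abstract come from.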

    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture the textural patterns of a given class of images well are crucial for the performance of texture segmentation methods. The manual selection of features or the design of new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground-truth segmentations. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
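
    For reference, the piecewise constant Mumford-Shah energy that the learned features are matched to has, in its standard form (the paper's actual training cost may differ in detail),

        E(\{\Omega_i\}, \{c_i\}) = \sum_i \int_{\Omega_i} \| F(x) - c_i \|^2 \, dx + \lambda \, \mathrm{length}(\Gamma)

    where F is the (learned) feature image, c_i is the constant value taken on region Omega_i, Gamma is the jump set separating the regions, and lambda weights the length penalty. Learning features F that keep this energy small is what yields an approximately piecewise constant feature image with a small jump set.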

    Two Approaches for Text Segmentation in Web Images

    There is a significant need to recognise the text in images on web pages, both for effective indexing and for presentation by non-visual means (e.g., audio). This paper presents and compares two novel methods for the segmentation of characters for subsequent extraction and recognition. The novelty of both approaches is the combination of topological features of characters (different in each case) with an anthropocentric perspective of colour perception, in preference to analysis in RGB space. Both approaches enable the extraction of text in complex situations, such as in the presence of varying colour and texture in both characters and background.
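
    The "anthropocentric" colour handling can be illustrated with a short sketch: colour differences are measured in a perceptually motivated space (here CIE Lab) rather than in raw RGB. The choice of Lab, the CIE76 difference, and the scikit-image calls below are illustrative assumptions, not the specific perceptual model used in the paper.

        import numpy as np
        from skimage import color

        def perceptual_distance_map(rgb_image, reference_rgb):
            """Per-pixel colour difference from a reference colour (floats in [0, 1]),
            computed in CIE Lab rather than RGB."""
            lab = color.rgb2lab(rgb_image)
            ref = np.asarray(reference_rgb, dtype=float).reshape(1, 1, 3)
            return color.deltaE_cie76(lab, color.rgb2lab(ref))

    Thresholding or clustering such a distance map groups pixels more in line with perceived colour similarity, which helps separate characters from backgrounds of varying colour and texture.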
