
    Empirical assessment of colour symmetries

    The quality of potential symmetries of the similarity structure of the Basic Colour Terms was assessed on the basis of a database of similarity judgements made by subjects in response to linguistically expressed questions. All potential symmetries can be statistically rejected, although the well-known and some novel interpretable symmetries are shown to be approximately correct.

    The second order local-image-structure solid

    Characterization of second order local image structure by a 6D vector (or jet) of Gaussian derivative measurements is considered. We examine the effect on jets of a group of transformations - affine intensity-scaling, image rotation and reflection, and their compositions - that preserve intrinsic image structure. We show how this group stratifies the jet space into a system of orbits. Considering individual orbits as points, a 3D orbifold is defined. We propose a norm on jet space which we use to induce a metric on the orbifold. The metric tensor shows that the orbifold is intrinsically curved. To allow visualization of the orbifold and numerical computation with it, we present a mildly-distorting but volume-preserving embedding of it into Euclidean 3-space. We call the resulting shape, which is like a flattened lemon, the second order local-image-structure solid. As an example use of the solid, we compute the distribution of local structures in noise and natural images. For noise images, analytical results are possible and they agree with the empirical results. For natural images, an excess of locally 1D structure is found.
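
    The jet measurement the abstract starts from can be sketched directly: the 6D vector of Gaussian derivative responses at each pixel. The sigma value and test image below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_order_jet(image, sigma=2.0):
    """6D jet (L, Lx, Ly, Lxx, Lxy, Lyy) of Gaussian derivative
    measurements at every pixel; orders are given per axis as (d/dy, d/dx)."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
    return np.stack([gaussian_filter(image, sigma, order=o) for o in orders],
                    axis=-1)

# A horizontal intensity ramp: Lx should be ~1 in the interior, Ly ~0.
ramp = np.tile(np.arange(32, dtype=float), (32, 1))
jet = second_order_jet(ramp)
```

    The group of transformations studied in the paper then acts on these 6D vectors, not on raw pixels.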

    Symmetry sensitivities of Derivative-of-Gaussian filters

    We consider the measurement of image structure using linear filters, in particular derivative-of-Gaussian (DtG) filters, which are an important model of V1 simple cells and widely used in computer vision, and whether such measurements can determine local image symmetry. We show that even a single linear filter can be sensitive to a symmetry, in the sense that specific responses of the filter can rule it out. We state and prove a necessary and sufficient, readily computable, criterion for filter symmetry-sensitivity. We use it to show that the six filters in a second order DtG family have patterns of joint sensitivity which are distinct for 12 different classes of symmetry. This rich symmetry-sensitivity adds to the properties that make DtG filters well-suited for probing local image structure, and provides a set of landmark responses suitable to be the foundation of a nonarbitrary system of feature categories.
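
    One concrete instance of the sensitivity notion described: for a patch that is mirror-symmetric about a vertical axis, any x-odd DtG filter (such as Lx) must respond zero on that axis, so a nonzero Lx response rules the symmetry out. The patch and sigma below are illustrative, not the paper's criterion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A patch mirror-symmetric about its vertical centre line (column 16).
u = np.linspace(-1.0, 1.0, 33)
patch = np.exp(-(u[None, :] ** 2 + 2.0 * u[:, None] ** 2))

# Lx is odd in x, so on the symmetry axis its response must vanish;
# observing a nonzero Lx there would rule the mirror symmetry out.
Lx = gaussian_filter(patch, sigma=2.0, order=(0, 1))
on_axis = Lx[:, 16]
```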

    Using basic image features for texture classification

    Representing texture images statistically as histograms over a discrete vocabulary of local features has proven widely effective for texture classification tasks. Images are described locally by vectors of, for example, responses to some filter bank; and a visual vocabulary is defined as a partition of this descriptor-response space, typically based on clustering. In this paper, we investigate the performance of an approach which represents textures as histograms over a visual vocabulary which is defined geometrically, based on the Basic Image Features of Griffin and Lillholm (Proc. SPIE 6492(09):1-11, 2007), rather than by clustering. BIFs provide a natural mathematical quantisation of a filter-response space into qualitatively distinct types of local image structure. We also extend our approach to deal with intra-class variations in scale. Our algorithm is simple: there is no need for a pre-training step to learn a visual dictionary, as in methods based on clustering, and no tuning of parameters is required to deal with different datasets. We have tested our implementation on three popular and challenging texture datasets and find that it produces consistently good classification results on each, including what we believe to be the best reported for the KTH-TIPS and equal best reported for the UIUCTex databases.
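
    The geometric-vocabulary idea can be sketched with a crude 3-class stand-in for the 7-class BIF scheme: each pixel is labelled by its dominant local structure, and the texture descriptor is the normalised label histogram. No clustering or dictionary-learning step is involved. The classes and thresholds below are simplifications, not Griffin and Lillholm's actual formula.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_histogram(img, sigma=2.0, eps=0.05):
    """Histogram over a tiny geometric vocabulary of local structure:
    each pixel is labelled flat, gradient-dominated, or curvature-dominated.
    A crude 3-class stand-in for the 7-class BIF vocabulary."""
    Lx = gaussian_filter(img, sigma, order=(0, 1))
    Ly = gaussian_filter(img, sigma, order=(1, 0))
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    grad = np.hypot(Lx, Ly)
    curv = np.abs(Lxx + Lyy)
    labels = np.argmax([np.full(img.shape, eps), grad, curv], axis=0)
    hist = np.bincount(labels.ravel(), minlength=3).astype(float)
    return hist / hist.sum()  # normalised, so images of any size compare

rng = np.random.default_rng(0)
h_noise = structure_histogram(rng.standard_normal((64, 64)))
h_flat = structure_histogram(np.zeros((64, 64)))
```

    Textures would then be compared by a histogram distance (e.g. chi-square) between such descriptors.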

    The Atlas Structure of Images

    Many operations of vision require image regions to be isolated and inter-related. This is challenging when they are different in detail and extent. Practical methods of Computer Vision approach this through the tools of downsampling, pyramids, cropping and patches. In this paper we develop an ideal geometric structure for this, compatible with the existing scale space model of image measurement. Its elements are apertures which view the image like fuzzy-edged portholes of frosted glass. We establish containment and cause/effect relations between apertures, and show that these link them into cross-scale atlases. Atlases formed of Gaussian apertures are shown to be a continuous version of the image pyramid used in Computer Vision, and allow various types of image description to be expressed naturally within their framework. We show that views through Gaussian apertures are approximately equivalent to the jets of derivative of Gaussian filter responses that form part of standard Scale Space theory. This supports a view of the simple cells of mammalian V1 as implementing a system of local views of the retinal image of varying extent and resolution. As a worked example we develop a keypoint descriptor scheme that outperforms previous schemes that do not make use of learning.
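
    A minimal sketch of the "fuzzy-edged porthole" idea: blur the image to the aperture's resolution, then weight by a Gaussian window of the given extent. This is only an illustration of the two parameters (extent and resolution); the paper's exact construction may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def aperture_view(img, cy, cx, extent, resolution):
    """View through a fuzzy-edged Gaussian aperture at (cy, cx): the image
    is blurred to the aperture's resolution, then weighted by a Gaussian
    window of the given extent (a sketch, not the paper's construction)."""
    blurred = gaussian_filter(img, resolution)
    y, x = np.indices(img.shape, dtype=float)
    window = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * extent ** 2))
    return window * blurred

view = aperture_view(np.ones((33, 33)), cy=16, cx=16,
                     extent=4.0, resolution=1.5)
```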

    Limits on transfer learning from photographic image data to X-ray threat detection

    BACKGROUND: X-ray imaging is a crucial and ubiquitous tool for detecting threats to transport security, but interpretation of the images presents a logistical bottleneck. Recent advances in Deep Learning image classification offer hope of improving throughput through automation. However, Deep Learning methods require large quantities of labelled training data. While photographic data is cheap and plentiful, comparable training sets are seldom available for the X-ray domain. OBJECTIVE: To determine whether and to what extent it is feasible to exploit the availability of photo data to supplement the training of X-ray threat detectors. METHODS: A new dataset was collected, consisting of 1901 matched pairs of photo & X-ray images of 501 common objects. Of these, 258 pairs were of 69 objects considered threats in the context of aviation. This data was used to test a variety of transfer learning approaches. A simple model of threat cue availability was developed to understand the limits of this transferability. RESULTS: Appearance features learned from photos provide a useful basis for training classifiers. Some transfer from the photo to the X-ray domain is possible as ∼40% of danger cues are shared between the modalities, but the effectiveness of this transfer is limited since ∼60% of cues are not. CONCLUSIONS: Transfer learning is beneficial when X-ray data is very scarce (of the order of tens of training images in our experiments) but provides no significant benefit when hundreds or thousands of X-ray images are available.
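
    The transfer recipe being tested can be sketched in toy form: a frozen feature extractor stands in for a backbone pretrained on plentiful photographic data, and only a small linear head is fit on the scarce labelled X-ray examples. All sizes and names here are invented for illustration; the paper's networks and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for features
# pretrained on photo data (a toy stand-in, not the paper's network).
W_frozen = rng.standard_normal((256, 32)) / 16.0

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features, never updated

# Scarce labelled X-ray data: only the linear head is fit on it.
X_xray = rng.standard_normal((40, 256))
y_xray = rng.integers(0, 2, size=40)
F = np.c_[features(X_xray), np.ones(40)]           # add a bias column
head, *_ = np.linalg.lstsq(F, y_xray, rcond=None)  # least-squares head

def predict(x):
    return (np.c_[features(x), np.ones(len(x))] @ head > 0.5).astype(int)

train_acc = (predict(X_xray) == y_xray).mean()
```

    With real data, the paper's finding is that this helps only while the X-ray set stays in the tens of images.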

    Identifying Human Strategies for Generating Word-Level Adversarial Examples

    Adversarial examples in NLP are receiving increasing research attention. One line of investigation is the generation of word-level adversarial examples against fine-tuned Transformer models that preserve naturalness and grammaticality. Previous work found that human- and machine-generated adversarial examples are comparable in their naturalness and grammatical correctness. Most notably, humans were able to generate adversarial examples much more effortlessly than automated attacks. In this paper, we provide a detailed analysis of exactly how humans create these adversarial examples. By exploring the behavioural patterns of human workers during the generation process, we identify statistically significant tendencies based on which words humans prefer to select for adversarial replacement (e.g., word frequencies, word saliencies, sentiment) as well as where and when words are replaced in an input sequence. With our findings, we seek to inspire efforts that harness human strategies for more robust NLP models.
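
    The word-saliency signal mentioned among the tendencies can be sketched as a leave-one-out score drop. The keyword-count "model" below is a toy stand-in for a fine-tuned Transformer classifier, and the word list is invented.

```python
def word_saliencies(words, score_fn):
    """Saliency of each word = drop in the model's score when that word is
    deleted; high-saliency words are candidates for adversarial replacement."""
    base = score_fn(words)
    return [base - score_fn(words[:i] + words[i + 1:])
            for i in range(len(words))]

# Toy "model": fraction of positive keywords (POS is an invented word list,
# standing in for a fine-tuned sentiment classifier).
POS = {"love", "great", "good"}
score = lambda ws: sum(w in POS for w in ws) / max(len(ws), 1)

sentence = "i love this great movie".split()
sal = word_saliencies(sentence, score)
```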

    Brittle Features May Help Anomaly Detection

    One-class anomaly detection is challenging. A representation that clearly distinguishes anomalies from normal data is ideal, but arriving at this representation is difficult since only normal data is available at training time. We examine the performance of representations, transferred from auxiliary tasks, for anomaly detection. Our results suggest that the choice of representation is more important than the anomaly detector used with these representations, although knowledge distillation can work better than using the representations directly. In addition, separability between anomalies and normal data is important but not the sole factor for a good representation, as anomaly detection performance is also correlated with more adversarially brittle features in the representation space. Finally, we show our configuration can detect 96.4% of anomalies in a genuine X-ray security dataset, outperforming previous results.
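
    A common way to use a transferred representation for one-class detection (a generic sketch, not necessarily the paper's detector) is to score each test point by its distance to the k-th nearest normal training point in feature space:

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=3):
    """Score each test point by its distance to the k-th nearest *normal*
    training point in representation space: larger = more anomalous."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :],
                       axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(1)
normal = rng.standard_normal((200, 8))   # features of normal training data
inliers = rng.standard_normal((5, 8))    # test points from the same cloud
outliers = inliers + 25.0                # test points shifted far away
scores_in = knn_anomaly_scores(normal, inliers)
scores_out = knn_anomaly_scores(normal, outliers)
```

    The paper's point is that how the feature space was obtained matters more than which such detector is run on top of it.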

    Segmentation of phase contrast microscopy images based on multi-scale local Basic Image Features histograms

    Phase contrast microscopy (PCM) is routinely used for the inspection of adherent cell cultures in all fields of biology and biomedicine. Key decisions for experimental protocols are often taken by an operator based on typically qualitative observations. However, automated processing and analysis of PCM images remain challenging due to the low contrast between foreground objects (cells) and background as well as various imaging artefacts. We propose a trainable pixel-wise segmentation approach whereby image structures and symmetries are encoded in the form of multi-scale Basic Image Features local histograms, and their classification is learned by random decision trees. This approach was validated for segmentation of cell versus background, and discrimination between two different cell types. Performance close to that of state-of-the-art specialised algorithms was achieved despite the general nature of the method. The low processing time (< 4 s per 1280 × 960 pixel image) is suitable for batch processing of experimental data as well as for interactive segmentation applications.
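
    The per-pixel feature the classifier consumes, a local histogram of discrete structure labels, can be sketched with a box-filter trick: filtering each label's indicator image gives, at every pixel, the fraction of that label within the window. The window size is illustrative; the paper pools over multiple scales.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_label_histograms(labels, n_labels, window):
    """Per-pixel histogram of discrete structure labels over a local window,
    the kind of feature fed to a random-decision-tree pixel classifier."""
    return np.stack([uniform_filter((labels == k).astype(float), size=window)
                     for k in range(n_labels)], axis=-1)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(32, 32))   # stand-in BIF label image
H = local_label_histograms(labels, n_labels=3, window=5)
```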

    Machine Learning Based Localization and Classification with Atomic Magnetometers

    We demonstrate identification of position, material, orientation, and shape of objects imaged by a ⁸⁵Rb atomic magnetometer performing electromagnetic induction imaging supported by machine learning. Machine learning maximizes the information extracted from the images created by the magnetometer, demonstrating the use of hidden data. Localization accuracy 2.6 times better than the spatial resolution of the imaging system and classification accuracy of up to 97% are obtained. This circumvents the need to solve the inverse problem and demonstrates the extension of machine learning to diffusive systems, such as low-frequency electrodynamics in media. Automated collection of task-relevant information from quantum-based electromagnetic imaging will have a significant impact in fields from biomedicine to security.