1,572 research outputs found

    Rapid Visual Categorization is not Guided by Early Salience-Based Selection

    The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing time efficiency in machine algorithms, which in turn have strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization. (Comment: 22 pages, 9 figures)
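    The early-selection strategy under test can be sketched in a few lines (an illustrative reconstruction, not the paper's code or models): a bottom-up saliency map, here the spectral-residual method, is computed first, and only the most salient patches would be forwarded to the categorizer. The function names, patch size, and number of candidates are assumptions for the example.

```python
import numpy as np
from scipy.signal import convolve2d

def spectral_residual_saliency(image):
    """Bottom-up saliency via the spectral-residual method: subtract a
    locally smoothed log-amplitude spectrum and invert the FFT."""
    f = np.fft.fft2(image)
    log_amp, phase = np.log(np.abs(f) + 1e-8), np.angle(f)
    box = np.ones((3, 3)) / 9.0
    residual = log_amp - convolve2d(log_amp, box, mode="same", boundary="wrap")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = convolve2d(sal, box, mode="same", boundary="wrap")   # light smoothing
    return sal / sal.max()

def top_salient_patches(image, saliency, patch=32, k=3):
    """Early selection: keep only the k highest-saliency patches as
    candidate sub-images for the categorizer."""
    h, w = saliency.shape
    scored = [(saliency[y:y + patch, x:x + patch].mean(), y, x)
              for y in range(0, h - patch + 1, patch)
              for x in range(0, w - patch + 1, patch)]
    scored.sort(reverse=True)
    return [image[y:y + patch, x:x + patch] for _, y, x in scored[:k]]

rng = np.random.default_rng(0)
img = rng.random((128, 128))                     # stand-in for a test stimulus
candidates = top_salient_patches(img, spectral_residual_saliency(img))
# Under the early-selection account, only `candidates` reach the categorizer;
# the paper's tests ask whether such candidates actually contain the target.
```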

    Dynamic relevance: vision-based focus of attention using artificial neural networks

    This paper presents a method for ascertaining the relevance of inputs in vision-based tasks by exploiting temporal coherence and predictability. In contrast to the tasks explored in many previous relevance experiments, the class of tasks examined in this study is one in which relevance is a time-varying function of the previous and current inputs. The method proposed in this paper dynamically allocates relevance to inputs by using expectations of their future values. As a model of the task is learned, the model is simultaneously extended to create task-specific predictions of the future values of inputs. Inputs that are not relevant, and therefore not accounted for in the model, will not be predicted accurately. These inputs can be de-emphasized and, in turn, a new, improved model of the task created. The techniques presented in this paper have been successfully applied to the vision-based autonomous control of a land vehicle, vision-based hand tracking in cluttered scenes, and the detection of faults in the plasma-etch step of semiconductor wafers.
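    The core mechanism, relevance weights derived from how well a learned, task-specific predictor anticipates each input, can be illustrated with a toy sketch (illustrative assumptions: a per-input linear one-step predictor trained by LMS stands in for the paper's neural-network model, and the exponential mapping from prediction error to relevance is just one plausible choice).

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 200
t = np.arange(n_steps)

# Toy input stream: 8 predictable, task-coherent inputs (sines) followed by
# 8 unpredictable distractor inputs (white noise).
stream = np.concatenate([np.sin(0.1 * t[:, None] + np.arange(8)),
                         rng.normal(size=(n_steps, 8))], axis=1)

n_inputs = stream.shape[1]
w = np.zeros(n_inputs)        # per-input one-step predictor coefficients
err_avg = np.ones(n_inputs)   # running mean of squared prediction error
lr, decay = 0.05, 0.95

for k in range(1, n_steps):
    pred = w * stream[k - 1]                       # predict the next value
    err = stream[k] - pred                         # prediction error
    w += lr * err * stream[k - 1]                  # LMS update of the predictor
    err_avg = decay * err_avg + (1 - decay) * err ** 2

# Inputs the task model predicts well keep high relevance; unpredictable
# inputs are de-emphasized before the next round of task learning.
relevance = np.exp(-err_avg)
relevance /= relevance.sum()
print(np.round(relevance, 3))   # the 8 predictable inputs receive larger weights
```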

    Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation

    Analyzing vibration data using deep neural network algorithms is an effective way to detect damages in rotating machinery at an early stage. However, the black-box approach of these methods often does not provide a satisfactory solution because the cause of classifications is not comprehensible to humans. Therefore, this work investigates the application of explainable AI (XAI) algorithms to convolutional neural networks for vibration-based condition monitoring. For this, various XAI algorithms are applied to classifications based on the Fourier transform as well as the order analysis of the vibration signal. The results are visualized as a function of the revolutions per minute (RPM), in the form of frequency-RPM maps and order-RPM maps. This makes it possible to assess the saliency assigned to features that depend on the rotation speed and to those with constant frequency. To compare the explanatory power of the XAI methods, investigations are first carried out with a synthetic data set with known class-specific characteristics. Then a real-world data set for vibration-based imbalance classification on an electric motor, which runs at a broad range of rotation speeds, is used. Special focus is placed on consistency under variable periodicity of the data, which corresponds to a varying rotation speed of a real-world machine. This work aims to show the different strengths and weaknesses of the methods for this use case: GradCAM, LRP, and LIME with a new perturbation strategy.
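    As a pointer to how such attributions are produced, below is a minimal Grad-CAM sketch over a frequency-RPM map classifier (the tiny CNN, input size, and class count are invented for the example and are not the architecture or data from this work; LRP and LIME with the proposed perturbation strategy are not reproduced here).

```python
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    """Toy classifier over 1-channel frequency-RPM maps (e.g. 64 x 64 bins)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                  nn.Linear(16 * 8 * 8, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

def grad_cam(model, x, target_class):
    """Grad-CAM: weight each channel of the last conv feature map by the
    mean gradient of the target logit, sum, and keep positive evidence."""
    feats = []
    hook = model.features[-2].register_forward_hook(       # last Conv2d layer
        lambda module, inp, out: feats.append(out))
    logits = model(x)
    hook.remove()
    fmap = feats[0]
    grads = torch.autograd.grad(logits[0, target_class], fmap)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)         # global-average-pool the grads
    cam = torch.relu((weights * fmap).sum(dim=1))          # (1, H, W) heatmap
    return cam / (cam.max() + 1e-8)

model = SpectrumCNN()
freq_rpm_map = torch.randn(1, 1, 64, 64)                   # dummy frequency-RPM map
heatmap = grad_cam(model, freq_rpm_map, target_class=1)
print(heatmap.shape)   # saliency over (frequency, RPM) cells
```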