    Automatic system for quality-based classification of marble textures

    In this paper, we present an automatic system and algorithms for the classification of marble slabs into different groups in real time on a production line, according to slab quality. The application of the system is aimed at the marble industry, in order to automate and improve the manual classification of marble slabs carried out at present. The system consists of a mechatronic prototype, which houses all the physical components required for the acquisition of marble slab images under suitable lighting conditions, and computational algorithms, which analyze the color texture of the marble surfaces and classify each slab into its corresponding group. In order to evaluate the influence of the color representation on the image analysis, four color spaces have been tested: RGB, XYZ, YIQ, and K-L. After the texture analysis performed with the sum and difference histograms algorithm, a feature extraction process has been implemented with principal component analysis. Finally, a multilayer perceptron neural network trained with the backpropagation algorithm with an adaptive learning rate is used to classify the marble slabs into three categories according to their quality. The results (a successful classification rate of 98.9%) show very high performance compared with the traditional (manual) system.
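
    The processing chain described above (sum and difference histogram texture features, PCA for dimensionality reduction, and a multilayer perceptron with an adaptive learning rate) can be sketched as follows. This is only a minimal illustration under assumed settings: the displacement, histogram bin count, number of principal components, and hidden-layer size are placeholders, not the values used by the authors.

```python
# Minimal sketch of the described pipeline: Unser-style sum and difference
# histogram features -> PCA -> multilayer perceptron. All numeric settings
# below (displacement, bins, components, hidden units) are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def sum_diff_features(channel, dr=1, dc=1, bins=32):
    """Statistics of the sum and difference histograms for one 8-bit channel."""
    a = channel[:-dr, :-dc].astype(np.int32)   # reference pixel
    b = channel[dr:, dc:].astype(np.int32)     # neighbour at displacement (dr, dc)
    eps = 1e-12
    feats = []
    for values, lo, hi in [((a + b).ravel(), 0, 510),      # sum histogram
                           ((a - b).ravel(), -255, 255)]:  # difference histogram
        hist, edges = np.histogram(values, bins=bins, range=(lo, hi))
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        feats += [(centers * p).sum(),            # mean
                  (p ** 2).sum(),                 # energy
                  -(p * np.log(p + eps)).sum()]   # entropy
    return np.array(feats)

def slab_features(image):
    """Concatenate texture features from the three channels of a color image
    (RGB here; the paper also evaluates XYZ, YIQ and K-L representations)."""
    return np.concatenate([sum_diff_features(image[..., c]) for c in range(3)])

# X: (n_slabs, n_features) matrix of slab_features, y: quality labels {0, 1, 2}
classifier = make_pipeline(
    PCA(n_components=5),
    MLPClassifier(hidden_layer_sizes=(20,), solver='sgd',
                  learning_rate='adaptive', max_iter=2000))
# classifier.fit(X_train, y_train); classifier.predict(X_test)
```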

    Passively mode-locked laser using an entirely centred erbium-doped fiber

    This paper describes the setup and experimental results for an entirely centred erbium-doped fiber laser with passively mode-locked output. The gain medium of the ring laser cavity configuration comprises a 3 m length of two-core optical fiber, wherein an undoped outer core region of 9.38 μm diameter surrounds a 4.00 μm diameter central core region doped with erbium ions at 400 ppm concentration. The generated stable soliton mode-locking output has a central wavelength of 1533 nm, and the pulses yield an average output power of 0.33 mW with a pulse energy of 31.8 pJ. The pulse duration is 0.7 ps, and the measured output repetition rate of 10.37 MHz corresponds to a 96.4 ns pulse spacing in the pulse train.
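
    As a quick consistency check, the quoted pulse energy and pulse spacing follow directly from the average power and repetition rate via the standard relations E = P_avg / f_rep and T = 1 / f_rep; nothing here goes beyond the figures stated in the abstract.

```python
# Pulse energy and spacing from average power and repetition rate
# (values taken from the abstract above).
P_avg = 0.33e-3        # average output power, W
f_rep = 10.37e6        # repetition rate, Hz

pulse_energy = P_avg / f_rep     # ~3.18e-11 J  ->  ~31.8 pJ
pulse_spacing = 1.0 / f_rep      # ~9.64e-8 s   ->  ~96.4 ns
print(f"{pulse_energy * 1e12:.1f} pJ, {pulse_spacing * 1e9:.1f} ns")
```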

    An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification

    While deep learning methods are increasingly being applied to tasks such as computer-aided diagnosis, these models are difficult to interpret, do not incorporate prior domain knowledge, and are often considered a "black box." The lack of model interpretability hinders them from being fully understood by target users such as radiologists. In this paper, we present a novel interpretable deep hierarchical semantic convolutional neural network (HSCNN) to predict whether a given pulmonary nodule observed on a computed tomography (CT) scan is malignant. Our network provides two levels of output: 1) low-level radiologist semantic features, and 2) a high-level malignancy prediction score. The low-level semantic outputs quantify the diagnostic features used by radiologists and serve to explain how the model interprets the images in an expert-driven manner. The information from these low-level tasks, along with the representations learned by the convolutional layers, is then combined and used to infer the high-level task of predicting nodule malignancy. This unified architecture is trained by optimizing a global loss function that includes both low- and high-level tasks, thereby learning all the parameters within a joint framework. Our experimental results using the Lung Image Database Consortium (LIDC) dataset show that the proposed method not only produces interpretable lung cancer predictions but also achieves significantly better results than common 3D CNN approaches.
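
    The two-level output and joint training described above can be summarized in a short sketch. This is an illustrative reconstruction, not the authors' HSCNN: the backbone depth, the number of semantic heads, and the loss weight are assumptions chosen only to show how the low-level predictions feed the malignancy head and how one global loss covers both levels.

```python
# A minimal multi-task sketch: shared 3D convolutional backbone, several
# low-level semantic heads plus a malignancy head, trained with one weighted
# global loss. Sizes and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HSCNNSketch(nn.Module):
    def __init__(self, n_semantic_tasks=5):
        super().__init__()
        self.backbone = nn.Sequential(            # shared 3D feature extractor
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        # One binary head per radiologist semantic feature (e.g. margin, texture).
        self.semantic_heads = nn.ModuleList(
            [nn.Linear(32, 2) for _ in range(n_semantic_tasks)])
        # The malignancy head sees the shared features plus the semantic logits.
        self.malignancy_head = nn.Linear(32 + 2 * n_semantic_tasks, 2)

    def forward(self, volume):                    # volume: (N, 1, D, H, W)
        feats = self.backbone(volume)
        sem_logits = [head(feats) for head in self.semantic_heads]
        combined = torch.cat([feats] + sem_logits, dim=1)
        return sem_logits, self.malignancy_head(combined)

def global_loss(sem_logits, mal_logits, sem_labels, mal_labels, lam=1.0):
    # Weighted sum of the low-level (semantic) and high-level (malignancy) terms.
    sem_loss = sum(F.cross_entropy(logits, sem_labels[:, i])
                   for i, logits in enumerate(sem_logits))
    return F.cross_entropy(mal_logits, mal_labels) + lam * sem_loss
```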

    Rotation before Recognition: Orientation-Invariant Identification of Visual Textures with ARTEX 2

    We describe the ARTEX 2 neural network for recognition of visual textures at arbitrary orientations. ARTEX 2 recognizes visual textures by first passing them through a preprocessing stage which rotates them to a canonical orientation. The resulting canonically oriented visual textures are then classified by a simplified version of the ARTEX texture-recognition algorithm. Several approaches to determining the proper angle for rotation to a canonical orientation are investigated, and their respective performances are compared on a 20-class database of visual textures. Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409)
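
    The core preprocessing idea (estimate a dominant orientation, rotate the texture to a canonical orientation, then classify as usual) can be illustrated with a gradient-based estimator. The structure-tensor estimate below is just one plausible choice standing in for the several approaches the paper compares, and the classifier itself is left out.

```python
# Illustration of "rotate before recognizing": estimate a dominant texture
# orientation (structure tensor; one plausible estimator, not necessarily the
# paper's) and rotate the patch to a canonical orientation before classifying.
import numpy as np
from scipy import ndimage

def dominant_orientation(gray):
    """Return the dominant orientation in degrees from the structure tensor."""
    gy, gx = np.gradient(gray.astype(float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # Orientation of the principal eigenvector of the averaged structure tensor.
    return 0.5 * np.degrees(np.arctan2(2.0 * jxy, jxx - jyy))

def canonicalize(gray):
    """Rotate the texture so its dominant orientation becomes the reference."""
    angle = dominant_orientation(gray)
    return ndimage.rotate(gray, -angle, reshape=False, mode='reflect')

# canonical_patch = canonicalize(texture_patch)   # then classify as usual
```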