925 research outputs found

    A Review on Automatic Color Form Dropout for Document Processing

    Color dropout converts documents such as color forms to black-and-white images by deleting specific colors so that only the information entered in the form is retained. Successful color dropout simplifies the task of extracting textual information from the image for the reader. The color dropout filter parameters include the color values of the non-dropout colors, the color space conversion, the distance calculation, and the dropout threshold. Color dropout is performed, in RGB or a luminance-chrominance space, by converting pixels whose color lies within the tolerance sphere of a non-dropout color to black and all others to white. This approach targets an FPGA platform, which lends itself to high-speed hardware implementation with low memory requirements; the design is described in VHDL. The color space transformation from RGB to YCbCr involves a matrix multiplication, and the dropout filter implementation is similar in both cases. The color dropout result may be represented in either RGB or YCbCr. DOI: 10.17762/ijritcc2321-8169.15011
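    The core filtering rule described above (pixels within a tolerance sphere of a non-dropout color become black, everything else white) can be sketched in NumPy; the function name, parameter names, and threshold value here are illustrative, not taken from the paper's VHDL design:

    ```python
    import numpy as np

    def color_dropout(rgb, keep_colors, threshold):
        """Map pixels within `threshold` (Euclidean distance in RGB) of any
        non-dropout color to black; map all other pixels to white."""
        h, w, _ = rgb.shape
        pixels = rgb.reshape(-1, 3).astype(np.float64)
        out = np.full(pixels.shape[0], 255, dtype=np.uint8)  # default: white
        for color in keep_colors:
            # Distance from every pixel to this non-dropout color.
            dist = np.linalg.norm(pixels - np.asarray(color, dtype=np.float64), axis=1)
            out[dist <= threshold] = 0  # inside the tolerance sphere -> black
        return out.reshape(h, w)
    ```

    For example, with black ink as the only non-dropout color, a red form-line pixel is dropped to white while the ink pixel survives as black. The same per-pixel rule applies after an RGB-to-YCbCr matrix transform; only the color values and distance space change.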

    Hyperspectral image enhancement and mixture deep-learning classification of corneal epithelium injuries.

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) classification of histogram features using a support vector machine with a Gaussian radial basis function kernel (SVM-GRBF); (ii) classification of physical image features using deep-learning convolutional neural networks (CNNs) only; and (iii) combined classification using CNNs and a linear SVM (SVM-Linear). The results indicate that the chosen histogram and length-scale features could be classified with up to 100% accuracy, particularly with the CNNs and CNNs-SVM approaches, when 80% of the data samples were used for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies in speed, objectivity, and reliability.
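    The Gaussian radial basis function at the core of the SVM-GRBF classifier mentioned above can be sketched in NumPy; the function name, the gamma value, and the 32-bin histogram size are illustrative assumptions, not details from the paper:

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
        sq_dist = (np.sum(A**2, axis=1)[:, None]
                   + np.sum(B**2, axis=1)[None, :]
                   - 2.0 * A @ B.T)
        # Clamp tiny negative values caused by floating-point cancellation.
        return np.exp(-gamma * np.maximum(sq_dist, 0.0))

    # Example: similarity between two hypothetical 32-bin histogram feature vectors.
    rng = np.random.default_rng(0)
    hists = rng.random((2, 32))
    K = rbf_kernel(hists, hists)
    ```

    An SVM trained on such a kernel separates the histogram features in the induced feature space; identical vectors have similarity 1, and similarity decays with squared distance.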

    ANN-MIND : dropout for neural network training with missing data

    M.Sc. (Computer Science). Abstract: It is well known that the quality of a dataset plays a central role in the results and conclusions drawn from its analysis; as the saying goes, "garbage in, garbage out". In recent years, neural networks have displayed good performance on a diverse range of problems. Unfortunately, neural networks are not immune to the misfortune presented by missing values. Furthermore, in many real-world settings, the only data available for training neural networks contains missing values. In such cases, we are left with little choice but to use this data for training, although doing so may result in a poorly trained neural network. Many systems currently in use simply discard observations with missing values from the training dataset, while others proceed with the data as-is and ignore the problems the missing values present. Still other approaches impute the missing values with fixed constants such as the mean or mode. Most neural network models work under the assumption that the supplied data contains no missing values. This dissertation explores a method for training neural networks when the training dataset contains missing values.
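    One simple way to connect dropout to missing data, in the spirit of the title, is to treat missing entries as if they were dropped-out input units: zero them and rescale the observed features, as in inverted dropout. This is only a minimal sketch of that idea, not the dissertation's actual algorithm:

    ```python
    import numpy as np

    def missing_as_dropout(x):
        """Treat NaN entries as dropped-out inputs: zero-fill them and rescale
        each row by its fraction of observed features (inverted-dropout style)."""
        mask = ~np.isnan(x)                           # True where observed
        keep_rate = mask.mean(axis=1, keepdims=True)  # fraction observed per row
        filled = np.where(mask, x, 0.0)               # zero-impute missing entries
        return filled / np.maximum(keep_rate, 1e-8)   # rescale surviving features
    ```

    The rescaling keeps the expected input magnitude comparable across rows with different numbers of missing values, mirroring how inverted dropout rescales activations at training time.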

    Dynamic noise filtering for multi-class classification of beehive audio data

    Honeybees are the most specialized insect pollinators and are critical not only for honey production but also for maintaining environmental balance by pollinating the flowers of a wide variety of crops. Recording and analyzing bee sounds has become a fundamental part of recent initiatives in the development of so-called smart hives. The majority of research on beehive sound analytics focuses on swarming detection, a relatively simple binary classification task (owing to the obvious difference between the sound of a swarming and a non-swarming bee colony) where machine learning models achieve good performance even when trained on small data. However, on more complex beehive sound analytics tasks, even modern machine learning approaches perform poorly. First, training such models requires a large dataset, but, to our knowledge, no large-scale beehive audio dataset is publicly available. Second, due to the specifics of beehive sounds, efficient noise filtering methods are required; however, we could not find a noise filtering method that substantially increases the performance of machine learning models. In this paper, we propose a dynamic noise filtering method applicable to spectrograms (image representations of audio data) that is superior to the most popular image noise filtering baselines. Further, we introduce a multi-class classification task for bee sounds and a large-scale dataset consisting of 10,000 beehive audio recordings. Finally, we report the results of a large-scale experiment involving various combinations of audio feature extraction and noise filtering methods together with various deep learning models. We believe the contributions of this paper will facilitate further research in the area of (beehive) sound analytics.
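    A typical image noise filtering baseline of the kind the paper compares against is a median filter applied to the spectrogram. A minimal pure-NumPy 3x3 median filter (function name is illustrative; this is a baseline, not the paper's dynamic method):

    ```python
    import numpy as np

    def median3x3(img):
        """3x3 median filter over a 2-D spectrogram (time x frequency),
        a standard image-denoising baseline that removes isolated spikes."""
        padded = np.pad(img, 1, mode="edge")
        # Stack the nine shifted views of the image, one per window position.
        stacked = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                            for r in range(3) for c in range(3)])
        return np.median(stacked, axis=0)

    # Toy spectrogram with one impulsive noise spike.
    spec = np.zeros((8, 8))
    spec[4, 4] = 100.0
    denoised = median3x3(spec)
    ```

    An isolated spike is replaced by the median of its neighborhood (here, zero), while broad spectral structure, which spans many adjacent bins, is largely preserved.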

    Target Tracking in Blind Range of Radars With Deep Learning

    Surveillance radars form the first line of defense in border areas. But due to highly uneven terrain, there are pockets of vulnerability where the enemy can move undetected until they are within the blind range of the radar. This class of targets is termed 'pop-up' targets. They pose a serious threat, as they can inflict severe damage on life and property. Blind ranges occur by design in pulsed radars. To mitigate the blind range problem, multistatic radar configurations and dual pulse transmission methods have been proposed. A multistatic radar configuration is highly hardware intensive, and dual pulse transmission can only reduce the blind range, not eliminate it. In this work we propose eliminating the blind range using deep-learning-based video tracking for monostatic surveillance radars. Since radars operate in a deploy-and-forget mode, the visual system must also operate in a similar way for added advantage. Deep learning has paved the way for automatic target detection and classification. However, a deep learning architecture is inherently incapable of tracking because frames are processed independently. To overcome this limitation, we use prior information from past detections to establish frame-to-frame correlation and predict future target positions, using a method inspired by CFAR, in a parallel channel for target tracking. © 2020 Warsaw University of Technology
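    The prediction step described above, using past detections to anticipate where the target will appear in the next frame, can be illustrated with a simple constant-velocity extrapolation. This is a stand-in for the paper's CFAR-inspired method, not its exact algorithm:

    ```python
    import numpy as np

    def predict_next(positions):
        """Predict the next-frame position from past per-frame detections,
        assuming roughly constant velocity between consecutive frames."""
        p = np.asarray(positions, dtype=float)
        velocity = p[-1] - p[-2]   # frame-to-frame displacement
        return p[-1] + velocity    # extrapolated position for the next frame

    # Detections (x, y) from three consecutive video frames.
    track = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]
    next_pos = predict_next(track)
    ```

    Establishing such frame-to-frame correlation is what turns per-frame detections from a deep network into a track; the predicted position also constrains where to search for the target in the next frame.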

    Discovery of Materials Through Applied Machine Learning

    Advances in artificial intelligence technology, specifically machine learning, have created opportunities in the materials sciences to accelerate materials discovery and to gain fundamental understanding of the interaction between the constituent elements of a material and the properties expressed by that material. Application of machine learning to experimental materials discovery is slow due to the monetary and temporal cost of experimental data, but parallel techniques such as continuous compositional gradients or high-throughput characterization setups are capable of generating larger amounts of data than the typical experimental process, and are therefore suitable for combination with machine learning. A random forest machine learning algorithm has been applied to two different materials discovery challenges, the discovery of new metallic glass-forming ternary compositions and of novel ammonia decomposition catalysts, and has led to accelerated discovery of high-performing materials.
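    A composition-to-property random forest of the kind described above can be sketched with scikit-learn on synthetic stand-in data; the ternary compositions, the toy property, and all parameter values here are illustrative, not the thesis's dataset or model settings:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical dataset: ternary composition fractions (summing to 1)
    # mapped to a measured material property.
    rng = np.random.default_rng(42)
    fracs = rng.dirichlet(np.ones(3), size=200)     # 200 candidate compositions
    prop = fracs @ np.array([1.0, 2.0, 3.0])        # toy property to learn

    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(fracs, prop)
    pred = model.predict(fracs[:5])                 # screen candidate compositions
    ```

    In a discovery loop, the fitted forest ranks untested compositions by predicted property, so experiments can be spent on the most promising candidates first.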

    Advanced Techniques for Ground Penetrating Radar Imaging

    Ground penetrating radar (GPR) has become one of the key technologies in subsurface sensing and, in general, in non-destructive testing (NDT), since it is able to detect both metallic and nonmetallic targets. GPR for NDT has been successfully introduced in a wide range of sectors, such as mining and geology, glaciology, civil engineering and civil works, archaeology, and security and defense. In recent decades, improvements in georeferencing and positioning systems have enabled the introduction of synthetic aperture radar (SAR) techniques in GPR systems, yielding GPR–SAR systems capable of providing high-resolution microwave images. In parallel, the radiofrequency front-end of GPR systems has been optimized in terms of compactness (e.g., smaller Tx/Rx antennas) and cost. These advances, combined with improvements in autonomous platforms, such as unmanned terrestrial and aerial vehicles, have fostered new fields of application for GPR, where fast and reliable detection capabilities are demanded. In addition, processing techniques have been improved, taking advantage of the research conducted in related fields such as inverse scattering and imaging. As a result, novel and robust algorithms have been developed for clutter reduction, automatic target recognition, and efficient processing of large sets of measurements to enable real-time imaging, among others. This Special Issue provides an overview of the state of the art in GPR imaging, focusing on the latest advances from both hardware and software perspectives.
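    As a concrete example of the clutter reduction mentioned above, a classic GPR preprocessing step is mean-trace subtraction on a B-scan (a 2-D image of depth samples by antenna positions); the function name and array layout are illustrative:

    ```python
    import numpy as np

    def remove_background(bscan):
        """Mean-trace (background) subtraction for a GPR B-scan of shape
        (depth_samples, traces): subtract the average A-scan so that
        horizontally constant reflections, such as the air-ground interface
        and antenna crosstalk, are suppressed."""
        return bscan - bscan.mean(axis=1, keepdims=True)

    # Toy B-scan: each depth sample has a constant background across traces.
    bscan = np.array([[1.0, 1.0, 1.0],
                      [2.0, 2.0, 2.0]])
    cleaned = remove_background(bscan)
    ```

    Reflections that vary along the scan direction, such as the hyperbolic signature of a buried target, survive this subtraction, while flat background layers are removed.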