
    Improving the Sharpness of Digital Images Using a Modified Laplacian Sharpening Technique

    Many imaging systems produce images with deficient sharpness due to various real-world limitations. Hence, image sharpening techniques have been used to improve the acutance of digital images. One such technique is the well-known Laplacian sharpening technique. When implementing the basic Laplacian technique for image sharpening, two main drawbacks were identified. First, the amount of introduced sharpness cannot be increased or decreased. Second, in many situations, the resulting image suffers from a noticeable increase in brightness around the sharpened edges. In this article, an improved version of the basic Laplacian technique is proposed. It contains two key modifications: weighting the Laplace operator to control the introduced sharpness, and tweaking the second-order derivatives to provide adequate brightness for the recovered edges. To perform reliable experiments, only real-degraded images were used, and accuracy was measured using a specialized no-reference image quality assessment metric. The experimental results show that the proposed technique outperformed comparable techniques in both recorded accuracy and visual appearance.
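    A minimal sketch of the weighted-Laplacian idea follows, assuming OpenCV and NumPy. The weight parameter and the clipping step are illustrative stand-ins for the paper's two modifications, not the authors' exact formulation, and the file names are placeholders.

        import cv2
        import numpy as np

        def laplacian_sharpen(image, weight=1.0):
            """Sharpen a grayscale image by subtracting a weighted Laplacian.

            weight > 1 strengthens the introduced sharpness; weight < 1 softens it.
            """
            img = image.astype(np.float64)
            lap = cv2.Laplacian(img, cv2.CV_64F)          # second-order derivatives
            sharpened = img - weight * lap                # classic g = f - w * lap(f)
            # Clipping keeps edge pixels inside the valid intensity range,
            # limiting the brightness overshoot that plain Laplacian sharpening
            # can introduce around edges.
            return np.clip(sharpened, 0, 255).astype(np.uint8)

        # Usage (file names are placeholders):
        gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
        cv2.imwrite("sharpened.png", laplacian_sharpen(gray, weight=0.7))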

    Comparing Adobe’s Unsharp Masks and High-Pass Filters in Photoshop Using the Visual Information Fidelity Metric

    The present study examines image sharpening techniques quantitatively. A technique known as unsharp masking has been the preferred image sharpening technique for imaging professionals for many years. More recently, another professional-level sharpening solution has been introduced, namely the high-pass filter technique of image sharpening. An extensive review of the literature revealed no purely quantitative studies comparing these techniques. The present research compares unsharp masking (USM) and high-pass filter (HPF) sharpening using an image quality metric known as Visual Information Fidelity (VIF). Prior researchers have used VIF data in research aimed at improving the USM sharpening technique. The present study aims to add to this branch of the literature by comparing the USM and HPF sharpening techniques. The objective of the present research is to determine which sharpening technique, USM or HPF, yields the highest VIF scores for two categories of images: macro images and architectural images. Each set of images was further analyzed to compare the VIF scores of subjects with high- and low-severity depth of field defects. Finally, the researcher proposed rules for choosing USM and HPF parameters that resulted in optimal VIF scores. For each category, the researcher captured 24 images (12 with high-severity defects and 12 with low-severity defects). Each image was sharpened using an iterative process of choosing USM and HPF sharpening parameters, applying sharpening filters with the chosen parameters, and assessing the resulting images using the VIF metric. The process was repeated until the VIF scores could no longer be improved. The highest USM and HPF VIF scores for each image were compared using a paired t-test for statistical significance. The t-test results demonstrated that:
    • The USM VIF scores for macro images (M = 1.86, SD = 0.59) outperformed those for HPF (M = 1.34, SD = 0.18), a statistically significant mean increase of 0.52, t(23) = 5.57, p = 0.0000115. Similar results were obtained for both the high-severity and low-severity subsets of macro images.
    • The USM VIF scores for architectural images (M = 1.40, SD = 0.24) outperformed those for HPF (M = 1.26, SD = 0.15), a statistically significant mean increase of 0.14, t(23) = 5.21, p = 0.0000276. Similar results were obtained for both the high-severity and low-severity subsets of architectural images.
    The researcher found that the optimal sharpening parameters for USM and HPF depend on the content of the image. The optimal choice of parameters for USM depends on whether the most important features are edges or objects. Specific rules for choosing USM parameters were developed for each class of images. HPF is simpler in that it uses only one parameter, Radius. Specific rules for choosing the HPF Radius were also developed for each class of images. Based on these results, the researcher concluded that USM outperformed HPF in sharpening macro and architectural images. The superior performance of USM may be because it gives users more parameters to control the sharpening process than HPF does.
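    For readers unfamiliar with the underlying operation, the sketch below shows a generic unsharp mask in Python with OpenCV. The radius, amount, and threshold parameters only loosely mirror the names in Photoshop's USM dialog; this is not Adobe's implementation, and the VIF scoring used in the study would be applied separately to the sharpened output.

        import cv2
        import numpy as np

        def unsharp_mask(image, radius=2.0, amount=1.0, threshold=0):
            """Generic unsharp masking: add back a scaled high-frequency residual.

            radius    - Gaussian sigma controlling the scale of sharpened detail
            amount    - fraction of the residual added back (1.0 = 100%)
            threshold - residuals below this magnitude are left untouched
            """
            img = image.astype(np.float64)
            blurred = cv2.GaussianBlur(img, (0, 0), radius)
            residual = img - blurred                      # high-frequency detail
            if threshold > 0:
                residual[np.abs(residual) < threshold] = 0
            return np.clip(img + amount * residual, 0, 255).astype(np.uint8)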

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In practice, image processing algorithms find exceptionally wide application. Moreover, the rapid growth of computing power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches.

    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. Another way to measure object volume is the Monte Carlo method, which estimates volume using random points. It only requires information on whether each random point falls inside or outside the object and does not require a 3D reconstruction. This paper proposes a computer vision system that measures the volume of irregularly shaped food products without 3D reconstruction, based on the Monte Carlo method with a heuristic adjustment. Five images of each food product were captured using five cameras and processed into binary images. Monte Carlo integration with the heuristic adjustment was then performed to estimate the volume from the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
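    As an illustration of the core idea only, the sketch below estimates volume from orthographic binary silhouettes along three axes. The actual system uses five calibrated camera views and a heuristic adjustment that is not reproduced here, so treat this as a simplified outline under those assumptions.

        import numpy as np

        def monte_carlo_volume(silhouettes, bounds, n_points=200_000, rng=None):
            """Estimate object volume from orthographic binary silhouettes.

            silhouettes - dict of 2D boolean masks keyed by viewing axis,
                          e.g. {"z": top_view, "y": front_view, "x": side_view}
            bounds      - (dx, dy, dz) size of the bounding box in world units
            """
            rng = np.random.default_rng(rng)
            dx, dy, dz = bounds
            pts = rng.random((n_points, 3)) * np.array([dx, dy, dz])

            def inside(mask, u, v, du, dv):
                # Map world coordinates to pixel indices of the binary mask.
                h, w = mask.shape
                iu = np.clip((u / du * w).astype(int), 0, w - 1)
                iv = np.clip((v / dv * h).astype(int), 0, h - 1)
                return mask[iv, iu]

            # A point counts as inside only if every silhouette agrees
            # (a visual-hull style test, no 3D reconstruction needed).
            keep = (inside(silhouettes["z"], pts[:, 0], pts[:, 1], dx, dy)
                    & inside(silhouettes["y"], pts[:, 0], pts[:, 2], dx, dz)
                    & inside(silhouettes["x"], pts[:, 1], pts[:, 2], dy, dz))

            # Volume = bounding-box volume * fraction of random points inside.
            return dx * dy * dz * keep.mean()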

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 17, 2014, at VMASC in Suffolk, Virginia.

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied in many domains. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a flight simulation from Seattle to Chicago. During the four-hour simulation, EEG signals were recorded for each pilot. We labeled 20-minute data segments as engaged or disengaged to fine-tune the deep network and used the remaining vast amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation.
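    A minimal PyTorch sketch of this initialize-then-fine-tune strategy is shown below. The layer sizes, feature dimension, optimizer settings, and autoencoder choice are assumptions made for illustration; the paper's actual deep architecture and EEG feature extraction are not specified here.

        import torch
        import torch.nn as nn

        N_FEATURES = 64   # assumed size of one EEG feature window (hypothetical)

        encoder = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                                nn.Linear(32, 16), nn.ReLU())
        decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                                nn.Linear(32, N_FEATURES))
        head = nn.Linear(16, 2)                    # engaged vs. disengaged

        def pretrain(unlabeled_batches, epochs=20):
            """Initialize the network by reconstructing unlabeled EEG windows."""
            autoencoder = nn.Sequential(encoder, decoder)
            opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
            for _ in range(epochs):
                for x in unlabeled_batches:        # x: (batch, N_FEATURES)
                    opt.zero_grad()
                    loss = nn.functional.mse_loss(autoencoder(x), x)
                    loss.backward()
                    opt.step()

        def finetune(labeled_batches, epochs=20):
            """Fine-tune on the small labeled (engaged/disengaged) subset."""
            model = nn.Sequential(encoder, head)
            opt = torch.optim.Adam(model.parameters(), lr=1e-4)
            for _ in range(epochs):
                for x, y in labeled_batches:       # y: class labels in {0, 1}
                    opt.zero_grad()
                    loss = nn.functional.cross_entropy(model(x), y)
                    loss.backward()
                    opt.step()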

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
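    As one concrete example of the pre-processing the review discusses, the sketch below chips a large scene into fixed-size tiles and standardizes each spectral band. The chip size, stride, and per-band statistics are illustrative choices, not recommendations taken from the paper.

        import numpy as np

        def chip_image(image, chip_size=256, stride=256):
            """Cut a large (H, W, C) Earth Observation scene into fixed-size chips."""
            h, w = image.shape[:2]
            chips = []
            for top in range(0, h - chip_size + 1, stride):
                for left in range(0, w - chip_size + 1, stride):
                    chips.append(image[top:top + chip_size, left:left + chip_size])
            return np.stack(chips)                 # (N, chip_size, chip_size, C)

        def normalize_per_band(chips):
            """Standardize each spectral band to zero mean and unit variance."""
            chips = chips.astype(np.float32)
            mean = chips.mean(axis=(0, 1, 2), keepdims=True)
            std = chips.std(axis=(0, 1, 2), keepdims=True) + 1e-8
            return (chips - mean) / std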