39 research outputs found

    Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus estimation algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Critical comments and plans for future work conclude the paper.
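    As an illustration of how a per-pixel sharpness estimate can drive automated masking, the sketch below thresholds local Laplacian energy. It is a generic heuristic with illustrative settings, not one of the three edge-based defocus estimators compared in the paper.

```python
# Minimal sketch: mask out-of-focus regions via local Laplacian energy.
# Generic sharpness heuristic for illustration only, not one of the
# edge-based defocus estimators evaluated in the paper; window size
# and threshold are arbitrary.
import cv2
import numpy as np

def focus_mask(image_bgr, window=31, threshold=15.0):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    lap = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
    # Local energy of the Laplacian approximates local sharpness.
    energy = cv2.boxFilter(lap * lap, ddepth=-1, ksize=(window, window))
    return (np.sqrt(energy) > threshold).astype(np.uint8) * 255

if __name__ == "__main__":
    image = cv2.imread("photo.jpg")            # illustrative file name
    cv2.imwrite("photo_mask.png", focus_mask(image))
```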

    Image Quality Modeling and Optimization for Non-Conventional Aperture Imaging Systems

    The majority of image quality studies have been performed on systems with conventional aperture functions. These systems have straightforward aperture designs and well-understood behavior. Image quality for these systems can be predicted by the General Image Quality Equation (GIQE). However, in order to continue pushing the boundaries of imaging, more control over the point spread function of an imaging system may be necessary. This requires modifications in the pupil plane of a system, causing a departure from the realm of most image quality studies. Examples include sparse apertures, synthetic apertures, coded apertures and phase elements. This work will focus on sparse aperture telescopes and the image quality issues associated with them; however, the methods presented will be applicable to other non-conventional aperture systems. In this research, an approach for modeling the image quality of non-conventional aperture systems will be introduced. While the modeling approach is based on previous work, a novel validation study will be performed, which accounts for the effects of both broadband illumination and wavefront error. One of the key image quality challenges for sparse apertures is post-processing ringing artifacts. These artifacts have been observed in modeled data, but a validation study will be performed to observe them in measured data and to compare them to model predictions. Once validated, the modeling approach will be used to perform a small set of design studies for sparse aperture systems, including spectral bandpass selection and aperture layout optimization.
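    The kind of pupil-plane modelling referred to above can be sketched numerically: under a monochromatic, aberration-free assumption, the PSF is the squared magnitude of the Fourier transform of the pupil function, and the MTF is the normalised magnitude of the PSF's Fourier transform. The aperture geometry and sampling below are illustrative, not the layouts studied in this work.

```python
# Sketch: PSF and MTF of a toy sparse aperture via FFT of the pupil function.
# Monochromatic, aberration-free assumption; geometry and sampling are
# illustrative only.
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

def circle(cx, cy, r):
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

# Three sub-apertures on a ring form a simple sparse pupil.
pupil = np.zeros((N, N))
for angle in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
    pupil[circle(60 * np.cos(angle), 60 * np.sin(angle), 24)] = 1.0

# PSF is the squared magnitude of the pupil's Fourier transform;
# the OTF is the Fourier transform of the PSF, and the MTF its magnitude.
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf).max()
```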

    An Algorithm on Generalized Unsharp Masking for Sharpness and Contrast of an Exploratory Data Model

    In applications such as medical radiography, enhancing movie features and observing the planets, it is necessary to enhance the contrast and sharpness of an image. The model proposes a generalized unsharp masking algorithm using the exploratory data model as a unified framework. The proposed algorithm is designed to simultaneously enhance contrast and sharpness by means of individual treatment of the model component and the residual, to reduce the halo effect by means of an edge-preserving filter, and to solve the out-of-range problem by means of log-ratio and tangent operations. A new system, called the tangent system, is introduced, based upon a specific Bregman divergence. Experimental results show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. Using this algorithm, the user can adjust the two parameters, contrast and sharpness, to obtain the desired output.
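    The base/detail decomposition the abstract describes can be sketched with conventional operations: an edge-preserving filter to limit halos, adaptive histogram equalisation on the base layer, and an amplified residual clipped back into range. This is a stand-in baseline, not the log-ratio/tangent operations the paper proposes.

```python
# Sketch: contrast and sharpness enhancement via a base/detail decomposition.
# An edge-preserving (bilateral) filter limits halos; the base layer and the
# residual are treated separately. Conventional stand-in for the paper's
# log-ratio/tangent operations; gains are illustrative.
import cv2
import numpy as np

def enhance(gray_u8, detail_gain=1.8, clip_limit=2.0):
    base = cv2.bilateralFilter(gray_u8, d=9, sigmaColor=40, sigmaSpace=9)
    detail = gray_u8.astype(np.float32) - base.astype(np.float32)
    # Contrast: adaptive histogram equalisation of the base layer.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    base_eq = clahe.apply(base).astype(np.float32)
    # Sharpness: amplified residual, clipped back into the valid range.
    return np.clip(base_eq + detail_gain * detail, 0, 255).astype(np.uint8)

result = enhance(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))  # illustrative file name
```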

    Data harmonization in PET imaging

    Medical imaging physics has advanced considerably in recent years, providing clinicians and researchers with increasingly detailed images that are well suited to be analyzed with a quantitative approach typical of the hard sciences, based on measurements and analysis of quantities of clinical interest extracted from the images themselves. Such an approach is placed in the context of quantitative imaging. The possibility of sharing data quickly, the development of machine learning and data mining techniques, and the increasing availability of computational power and digital data storage which characterize this age constitute a great opportunity for quantitative imaging studies. The interest in large multicentric databases that gather images from single research centers is growing year after year. Big datasets offer very interesting research perspectives, primarily because they allow an increase in the statistical power of studies. At the same time, they raise a compatibility issue between the data themselves. Indeed, images acquired with different scanners and protocols can differ greatly in quality, and measures extracted from images with different quality might not be compatible with each other. Harmonization techniques have been developed to circumvent this problem. Harmonization refers to all efforts to combine data from different sources and provide users with a comparable view of data from different studies. Harmonization can be done before acquiring data, by choosing a-priori appropriate acquisition protocols through a preliminary joint effort between research centers, or it can be done a-posteriori, i.e. images are grouped into a single dataset and then any effects on measures caused by technical acquisition factors are removed. Although a-priori harmonization guarantees the best results, it is not often used, for practical and/or technical reasons. In this thesis I will focus on a-posteriori harmonization. It is important to note that when we consider multicentric studies, in addition to the technical variability related to scanners and acquisition protocols, there may be a demographic variability that makes single-center samples not statistically equivalent to each other. The wide individual variability that characterizes human beings, even more pronounced when patients are enrolled from very different geographical areas, can certainly exacerbate this issue. In addition, we must consider that biological processes are complex phenomena: quantitative imaging measures can be affected by numerous confounding demographic variables, even ones apparently unrelated to the measures themselves. A good harmonization method should be able to preserve inter-individual variability and at the same time remove all the effects due to technical acquisition factors. Heterogeneity in acquisition, together with great inter-individual variability, makes harmonization very hard to achieve. Harmonization methods currently used in the literature are able to preserve only the inter-subject variability described by a set of known confounding variables, while all the unknown confounding variables are wrongly removed. This might lead to incorrect harmonization, especially if the unknown confounders play an important role. This issue is emphasized in practice, as it sometimes happens that demographic variables known to play a major role are not available.
The final goal of my thesis is a proposal for a harmonization method, developed in the context of amyloid Positron Emission Tomography (PET), which aims to remove the effects of variability induced by technical factors while keeping all the inter-individual differences. Since knowing all the demographic confounders is almost impossible, both practically and theoretically, my proposal does not require knowledge of these variables. The main point is to characterize image quality through a set of quality measures evaluated in regions of interest (ROIs), which are required to be as independent as possible from anatomical and clinical variability in order to exclusively highlight the effect of technical factors on image texture. Ideally, this makes it possible to decouple the between-subjects variability from the technical one: the latter can be directly removed while the former is automatically preserved. Specifically, I defined and validated 3 quality measures based on image texture properties. In addition, I used an already existing quality metric, and I considered the reconstruction matrix dimension to take image resolution into account. My work has been performed using a multicentric dataset consisting of 1001 amyloid PET images. Before dealing specifically with harmonization, I handled some important issues: I built a relational database to organize and manage data, and then I developed an automated algorithm for image pre-processing to achieve registration and quantification. This work might also be used in other imaging contexts: in particular, I believe it could be applied to fluorodeoxyglucose (FDG) PET and tau PET. The consequences of the harmonization I developed have been explored at a preliminary level. My proposal should be considered a starting point, as I mainly dealt with the issue of quality measures, while the harmonization of the variables itself was done with a linear regression model. Although harmonization through linear models is often used, more sophisticated techniques are present in the literature. It would be interesting to combine them with my work. Further investigations would be desirable in the future.
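    The final linear-regression step mentioned above might be sketched as residualisation: fit each image-derived measure on the technical quality covariates and keep the residual plus the grand mean, so the variance explained by technical factors is removed while between-subject variance is preserved. Array names below are illustrative.

```python
# Sketch of a-posteriori harmonization by residualisation: remove the part of
# each imaging measure explained by technical quality covariates. Array names
# are illustrative, not those used in the thesis.
import numpy as np
from sklearn.linear_model import LinearRegression

def harmonize(measures, quality_covariates):
    """measures: (n_subjects, n_measures); quality_covariates: (n_subjects, n_quality)."""
    harmonized = np.empty_like(measures, dtype=float)
    for j in range(measures.shape[1]):
        model = LinearRegression().fit(quality_covariates, measures[:, j])
        technical_effect = model.predict(quality_covariates)
        # Re-add the grand mean so harmonized values stay on the original scale.
        harmonized[:, j] = measures[:, j] - technical_effect + measures[:, j].mean()
    return harmonized
```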

    The effect of scene content on image quality

    Device-dependent metrics attempt to predict image quality from an ‘average signal’, usually embodied in test targets. Consequently, the metrics perform well on individual ‘average looking’ scenes and test targets, but provide lower correlation with subjective assessments when working with a variety of scenes whose characteristics differ from the ‘average signal’. This study considers the issue of scene dependency in image quality. It aims to quantify the change in quality with scene content, to research the problem of scene dependency in relation to device-dependent image quality metrics, and to provide a solution to it. A novel subjective scaling method was developed in order to derive individual attribute scales using the results from the overall image quality assessments. This was an analytical top-down approach, which does not require separate scaling of individual attributes and does not assume that each attribute is independent of the other attributes. From the measurements, interval scales were created and the effective scene dependency factor was calculated for each attribute. Two device-dependent image quality metrics, the Effective Pictorial Information Capacity (EPIC) and the Perceived Information Capacity (PIC), were used to predict subjective image quality for a test set that varied in sharpness and noisiness. These metrics were found to be reliable predictors of image quality. However, they were not equally successful in predicting quality for different images with varying scene content. Objective scene classification was thus considered and employed in order to deal with the problem of scene dependency in device-dependent metrics. It used objective scene descriptors which correlated with subjective criteria on scene susceptibility. This process resulted in the development of a fully automatic classification of scenes into ‘standard’ and ‘non-standard’ groups, and the result allows the calculation of calibrated metric values for each group. The classification and metric calibration performance was quite encouraging, not only because it improved mean image quality predictions across all scenes, but also because it catered for non-standard scenes, which originally produced low correlations. The findings indicate that the proposed automatic scene classification method has great potential for tackling the problem of scene dependency when modelling device-dependent image quality. In addition, possible further studies of objective scene classification are discussed.
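    The objective scene classification idea might be sketched as follows, with hypothetical descriptors (edge density, global contrast) and hypothetical thresholds standing in for the descriptors and calibration actually developed in the study.

```python
# Sketch: flag 'standard' vs 'non-standard' scenes from simple objective
# descriptors. The descriptors and thresholds are hypothetical stand-ins for
# the calibrated classification developed in the study.
import cv2

def scene_descriptors(gray_u8):
    edges = cv2.Canny(gray_u8, 100, 200)
    edge_density = float(edges.mean()) / 255.0   # proxy for scene busyness
    contrast = float(gray_u8.std()) / 255.0      # proxy for global contrast
    return edge_density, contrast

def is_standard_scene(gray_u8, edge_range=(0.02, 0.15), min_contrast=0.12):
    edge_density, contrast = scene_descriptors(gray_u8)
    return edge_range[0] <= edge_density <= edge_range[1] and contrast >= min_contrast
```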

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to properly and efficiently reply to these questions, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems, since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance "scene-dependent" and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels, since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance "in the field" (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart, which is more representative of natural scene content than the abovementioned test charts. This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTFs) and NPSs (SPD-NPSs). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters were replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions were replaced with contextual detection (cCSF) or discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias. Most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre.
They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics were discussed, as well as their practical implementation and relevant applications.
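    A texture (dead-leaves style) MTF of the kind referred to above is commonly estimated from radially averaged power spectra, with the noise power spectrum subtracted from the captured spectrum; the sketch below shows that estimator in minimal form, omitting windowing, calibration and frequency scaling.

```python
# Sketch: texture (dead-leaves style) MTF estimated from power spectra,
# MTF(f) ~ sqrt((PSD_captured(f) - NPS(f)) / PSD_reference(f)).
# Windowing, calibration and frequency scaling are omitted for brevity.
import numpy as np

def radial_psd(image):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    h, w = spec.shape
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), spec.ravel()) / np.maximum(counts, 1)

def texture_mtf(captured, reference, noise_patch):
    psd_c, psd_r, nps = (radial_psd(a) for a in (captured, reference, noise_patch))
    n = min(len(psd_c), len(psd_r), len(nps))
    return np.sqrt(np.clip(psd_c[:n] - nps[:n], 0.0, None) / psd_r[:n])
```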

    Classification of breast lesions in ultrasonography using sparse logistic regression and morphology‐based texture features

    Purpose: This work proposes a new reliable computer-aided diagnostic (CAD) system for the diagnosis of breast cancer from breast ultrasound (BUS) images. The system can be useful to reduce the number of biopsies and pathological tests, which are invasive, costly, and often unnecessary. Methods: The proposed CAD system classifies breast tumors into benign and malignant classes using morphological and textural features extracted from BUS images. The images are first preprocessed to enhance the edges and filter the speckles. The tumor is then segmented semiautomatically using the watershed method. Given the tumor contour, a set of 855 features, including 21 shape-based, 810 contour-based, and 24 textural features, is extracted from each tumor. Then, a Bayesian Automatic Relevance Determination (ARD) mechanism is used for computing the discrimination power of different features and for dimensionality reduction. Finally, a logistic regression classifier computes the posterior probabilities of malignant vs benign tumors using the reduced set of features. Results: A dataset of 104 BUS images of breast tumors, including 72 benign and 32 malignant tumors, was used for evaluation using eightfold cross-validation. The algorithm outperformed six state-of-the-art methods for BUS image classification with large margins by achieving 97.12% accuracy, 93.75% sensitivity, and 98.61% specificity. Conclusions: Using ARD, the proposed CAD system selects five new features for breast tumor classification and outperforms the state of the art, making it a reliable and complementary tool to help clinicians diagnose breast cancer.
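    A minimal sketch of the classification stage, assuming a feature matrix of the stated dimensions: sparse (L1-penalised) logistic regression with eight-fold cross-validation stands in for the ARD-based feature selection and logistic classifier of the paper; the data below are random placeholders, not real BUS features.

```python
# Sketch: sparse (L1-penalised) logistic regression with eight-fold
# cross-validation, a simplified stand-in for ARD feature selection plus
# logistic classification. X and y are placeholders with the dataset's
# dimensions, not real features or labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(104, 855)            # placeholder feature matrix
y = np.random.randint(0, 2, size=104)   # placeholder benign/malignant labels

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
)
accuracy = cross_val_score(clf, X, y, cv=8, scoring="accuracy").mean()
```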

    Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection. With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of the stepwise-selection classifier, although the latter provided a higher total area under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and the stepwise-selection classifier correctly identified 61% and 34% of the benign masses, respectively, without missing any malignant masses. Our results show that the choice of the feature selection technique is important in computer-aided diagnosis, and that the GA may be a useful tool for designing classifiers for lesion characterization.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/48962/2/m81014.pd
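    The partial area index used as the GA fitness can be sketched directly from an ROC curve: average the specificity over the region where the true-positive fraction exceeds the threshold. The snippet below scores one candidate feature subset with Fisher's linear discriminant; the GA loop, data and cross-validation scheme are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: GA fitness based on the ROC partial area index (average specificity
# above a sensitivity threshold), scored for one candidate feature subset with
# Fisher's linear discriminant. The GA loop and data loading are omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_curve
from sklearn.model_selection import cross_val_predict

def partial_area_index(y_true, scores, tpf0=0.95):
    fpr, tpr, _ = roc_curve(y_true, scores)
    tpr_grid = np.linspace(tpf0, 1.0, 200)
    specificity = 1.0 - np.interp(tpr_grid, tpr, fpr)
    # Average specificity over the high-sensitivity region (TPF >= tpf0).
    return specificity.mean()

def fitness(feature_mask, X, y):
    if not feature_mask.any():
        return 0.0
    scores = cross_val_predict(LinearDiscriminantAnalysis(), X[:, feature_mask],
                               y, cv=5, method="decision_function")
    return partial_area_index(y, scores)
```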

    Image processing system for automatic scratch detection on metallic reflective surfaces

    In industry, problems due to human error, mechanical flaws and transportation may occur, and they need to be detected in fast and efficient ways. In order to eliminate the failures of human inspection, automated systems, usually involving image processing, come into action. This thesis work targets one common mass-production problem on specular surfaces, i.e. scratch detection. To achieve this, we have implemented two different prototypes: the low-cost system is based on basic line detection, and the mid-end system depends on learning-based detection. Both systems are implemented on embedded platforms and performance comparisons are made. Detailed analysis is carried out on computational cost and detection performance. This real-world evaluation is done on a mechanical prototype in a laboratory environment.
    M.S. - Master of Science
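    The basic line-detection route mentioned for the low-cost prototype might look like the following sketch (Canny edges plus a probabilistic Hough transform); thresholds are illustrative and would need tuning to the lighting and surface finish.

```python
# Sketch of a basic line-detection approach to scratch detection: Canny edges
# followed by a probabilistic Hough transform. Thresholds are illustrative.
import math
import cv2

def detect_scratches(gray_u8):
    edges = cv2.Canny(gray_u8, 60, 180)
    lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180,
                            threshold=40, minLineLength=30, maxLineGap=5)
    return [] if lines is None else [tuple(line[0]) for line in lines]

scratches = detect_scratches(cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE))  # illustrative file
```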