147 research outputs found

    Estimation and identification for 2-D block Kalman filtering

    This correspondence is concerned with the development of a recursive identification and estimation procedure for 2-D block Kalman filtering. The recursive identification scheme can be used on-line to update the image model parameters at each iteration based upon the local statistics within a block of the observed noisy image. The covariance matrix of the driving noise can also be estimated at each iteration of this algorithm. A recursive procedure is given for computing the parameters of the higher-order models. Simulation results are also provided.
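    For illustration only, the following sketch shows a generic block-wise Kalman measurement update of the kind such a scheme builds on; the state vector, observation matrix H and noise covariance R are placeholders, not the correspondence's specific 2-D image model.

```python
import numpy as np

def block_kalman_update(x_pred, P_pred, y, H, R):
    """One measurement-update step for a block of pixels stacked as a vector.

    x_pred : predicted block state, shape (n,)
    P_pred : predicted error covariance, shape (n, n)
    y      : observed noisy block, flattened to shape (m,)
    H      : observation matrix, shape (m, n)
    R      : observation-noise covariance, shape (m, m)
    """
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)     # corrected block estimate
    P_new = (np.eye(x_pred.size) - K @ H) @ P_pred
    return x_new, P_new
```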

    Constrained least-squares digital image restoration

    The design of a digital image restoration filter must address four concerns: the completeness of the underlying imaging system model, the validity of the restoration metric used to derive the filter, the computational efficiency of the algorithm for computing the filter values and the ability to apply the filter in the spatial domain. Consistent with these four concerns, this dissertation presents a constrained least-squares (CLS) restoration filter for digital image restoration. The CLS restoration filter is based on a comprehensive, continuous-input/discrete-processing/continuous-output (c/d/c) imaging system model that accounts for acquisition blur, spatial sampling, additive noise and imperfect image reconstruction. The c/d/c model-based CLS restoration filter can be applied rigorously and is easier to compute than the corresponding c/d/c model-based Wiener restoration filter. The CLS restoration filter can be efficiently implemented in the spatial domain as a small convolution kernel. Simulated restorations are used to illustrate the CLS filter's performance for a range of imaging conditions. Restoration studies based, in part, on an actual Forward Looking Infrared (FLIR) imaging system, show that the CLS restoration filter can be used for effective range reduction. The CLS restoration filter is also successfully tested on blurred and noisy radiometric images of the earth's outgoing radiation field from a satellite-borne scanning radiometer used by the National Aeronautics and Space Administration (NASA) for atmospheric research.
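    As a point of reference, the textbook frequency-domain form of a constrained least-squares filter (with a Laplacian smoothness constraint) can be sketched as below; this is a simplified discrete formulation, not the dissertation's c/d/c model-based spatial-domain kernel, and the regularisation weight gamma is an assumed tuning parameter.

```python
import numpy as np

def cls_restore(g, h_psf, gamma=0.01):
    """Frequency-domain constrained least-squares restoration (textbook form).

    g      : blurred, noisy image (2-D array)
    h_psf  : blur point-spread function, same shape as g, centred at [0, 0]
    gamma  : regularisation weight trading noise smoothing against sharpness
    """
    # Discrete Laplacian used as the smoothness constraint operator
    p = np.zeros_like(g, dtype=float)
    p[0, 0] = 4.0
    p[0, 1] = p[1, 0] = p[0, -1] = p[-1, 0] = -1.0

    G = np.fft.fft2(g)
    H = np.fft.fft2(h_psf)
    P = np.fft.fft2(p)

    # F_hat = H* G / (|H|^2 + gamma |P|^2)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F_hat))
```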

    Superresolution and Synthetic Aperture Radar


    3-dimensional median-based algorithms in image sequence processing

    Thesis (Master's), Department of Electrical and Electronics Engineering, Bilkent University, Ankara, 1990. This thesis introduces new 3-dimensional median-based algorithms to be used in two of the main research areas in image sequence processing: image sequence enhancement and image sequence coding. Two new nonlinear filters are developed in the field of image sequence enhancement. The motion performances and the output statistics of these filters are evaluated. The simulations show that the filters improve the image quality to a large extent compared to other examples from the literature. The second field addressed is image sequence coding. A new 3-dimensional median-based coding and decoding method is developed for stationary images with the aim of good slow-motion performance. All the algorithms developed are simulated on real image sequences using a video sequencer.
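    A minimal sketch of a plain 3-D (spatio-temporal) median filter is given below for orientation; it uses a fixed rectangular window and does not reproduce the motion-adaptive behaviour of the filters developed in the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_3d(sequence, size=(3, 3, 3)):
    """Apply a 3-D (temporal + spatial) median filter to an image sequence.

    sequence : array of shape (frames, rows, cols)
    size     : window extent along (time, row, column)
    """
    return median_filter(sequence, size=size, mode='nearest')

# Example: denoise a synthetic 10-frame sequence corrupted by impulse noise
seq = np.random.rand(10, 64, 64)
noisy = seq.copy()
mask = np.random.rand(*seq.shape) < 0.05        # 5 % salt-and-pepper noise
noisy[mask] = np.random.choice([0.0, 1.0], size=mask.sum())
clean = median_filter_3d(noisy)
```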

    Applications of fuzzy counterpropagation neural networks to non-linear function approximation and background noise elimination

    An adaptive filter that can operate in an unknown environment by means of a learning mechanism is well suited to the speech enhancement process. This research develops a novel ANN model which incorporates the fuzzy set approach and which can perform non-linear function approximation. The model is used as the basic structure of an adaptive filter. The learning capability of the ANN is expected to reduce the development time and cost of designing adaptive filters based on the fuzzy set approach. A combination of both techniques may result in a learnable system that can tackle the vagueness problem of the changing environment in which the adaptive filter operates. The proposed model is called the Fuzzy Counterpropagation Network (Fuzzy CPN). It has fast learning capability and a self-growing structure. The model is applied to non-linear function approximation, chaotic time series prediction and background noise elimination.
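    To make the underlying architecture concrete, here is a minimal sketch of a classical (crisp) counterpropagation network, a competitive Kohonen layer followed by a Grossberg outstar layer, used for function approximation; the fuzzy membership and self-growing structure of the Fuzzy CPN described above are not reproduced, and all parameter values are assumed.

```python
import numpy as np

class CounterpropagationNet:
    """Minimal crisp counterpropagation network for function approximation."""

    def __init__(self, n_units, in_dim, out_dim, rng=None):
        rng = np.random.default_rng(rng)
        self.w = rng.uniform(-1, 1, (n_units, in_dim))   # Kohonen prototypes
        self.v = np.zeros((n_units, out_dim))             # Grossberg output weights

    def _winner(self, x):
        # Winner-take-all: index of the prototype closest to the input
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train_step(self, x, t, alpha=0.1, beta=0.1):
        j = self._winner(x)
        self.w[j] += alpha * (x - self.w[j])   # move prototype toward the input
        self.v[j] += beta * (t - self.v[j])    # move output toward the target
        return j

    def predict(self, x):
        return self.v[self._winner(x)]

# Example: approximate y = sin(x) on [0, 2*pi]
net = CounterpropagationNet(n_units=20, in_dim=1, out_dim=1, rng=0)
for x in np.random.default_rng(1).uniform(0, 2 * np.pi, 2000):
    net.train_step(np.array([x]), np.array([np.sin(x)]))
```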

    Investigation of the effects of image compression on the geometric quality of digital photogrammetric imagery

    We are living in a decade where the use of digital images is becoming increasingly important. Photographs are now converted into digital form, and direct acquisition of digital images is becoming increasingly important as sensors and the associated electronics improve. Unlike images in analogue form, digital representation of images allows visual information to be easily manipulated in useful ways. One practical problem of digital image representation is that it requires a very large number of bits; hence one encounters a fairly large volume of data in a digital production environment if images are stored uncompressed on disk. With the rapid advances in sensor technology and digital electronics, the number of bits grows ever larger in softcopy photogrammetry, remote sensing and multimedia GIS. As a result, it is desirable to find efficient representations for digital images in order to reduce the memory required for storage, improve the data access rate from storage devices, and reduce the time required for transfer across communication channels. The component of digital image processing that deals with this problem is called image compression. Image compression is a necessity for the utilisation of large digital images in softcopy photogrammetry, remote sensing and multimedia GIS. Numerous image compression standards exist today with the common goal of reducing the number of bits needed to store images and facilitating the interchange of compressed image data between various devices and applications. The JPEG image compression standard is one alternative for carrying out the image compression task. This standard was formed under the auspices of ISO and CCITT for the purpose of developing an international standard for the compression and decompression of continuous-tone, still-frame, monochrome and colour images. The JPEG standard algorithm falls into three general categories: the baseline sequential process that provides a simple and efficient algorithm for most image coding applications, the extended DCT-based process that allows the baseline system to satisfy a broader range of applications, and an independent lossless process for applications demanding that type of compression. This thesis experimentally investigates the geometric degradations resulting from lossy JPEG compression of photogrammetric imagery at various quality factors. The effects and the suitability of lossy JPEG compression on industrial photogrammetric imagery are investigated. Examples are drawn from the extraction of targets in close-range photogrammetric imagery. In the experiments, JPEG was used to compress and decompress a set of test images. The algorithm was tested on digital images containing various levels of entropy (a measure of the information content of an image) acquired with different image capture capabilities. Residual data were obtained by taking the pixel-by-pixel difference between the original data and the reconstructed data. The root mean square (rms) error of the residual was used as the quality measure for judging the images produced by the JPEG (DCT-based) compression technique. Two techniques, TIFF (LZW) compression and JPEG (DCT-based) compression, are compared with respect to the compression ratios achieved. JPEG (DCT-based) yields better compression ratios and seems to be a good choice for image compression.
    Further investigation found that, for grey-scale images, the best compression ratios were obtained when quality factors between 60 and 90 were used (i.e., at compression ratios of 1:10 to 1:20). At these quality factors the reconstructed data has virtually no degradation in visual or geometric quality for the application at hand. Recently, many fast and efficient image file formats have also been developed to store, organise and display images in an efficient way. Almost every image file format incorporates some kind of compression method to manage data within commonplace networks and storage devices. The current major file formats used in softcopy photogrammetry, remote sensing and multimedia GIS were also investigated. It was found that the choice of a particular image file format for a given application generally involves several interdependent considerations, including quality, flexibility, computation, storage and transmission. The suitability of a file format for a given purpose is best determined by knowing its original purpose. Some of these formats are widely used (e.g., TIFF, JPEG) and serve as exchange formats. Others are adapted to the needs of particular applications or particular operating systems.
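    The residual-based quality measure described above can be reproduced in a few lines; the sketch below uses Pillow's JPEG encoder and a hypothetical file name, and computes the rms error of the pixel-by-pixel residual together with an approximate compression ratio for an 8-bit grey-scale image.

```python
import io
import numpy as np
from PIL import Image

def jpeg_rms_error(image_path, quality):
    """Compress an image with JPEG at a given quality factor and return
    the rms error of the pixel-wise residual plus the compression ratio."""
    original = Image.open(image_path).convert('L')
    buf = io.BytesIO()
    original.save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    compressed = Image.open(buf)

    a = np.asarray(original, dtype=float)
    b = np.asarray(compressed, dtype=float)
    rms = np.sqrt(np.mean((a - b) ** 2))
    ratio = a.size / buf.getbuffer().nbytes   # 8-bit pixels / JPEG bytes
    return rms, ratio

# Example sweep over the quality factors discussed above
# ('target_image.tif' is a hypothetical test image):
# for q in (60, 70, 80, 90):
#     print(q, jpeg_rms_error('target_image.tif', q))
```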

    Single photon emission computed tomography: performance assessment, development and clinical applications

    This is a general investigation of the SPECT imaging process. The primary aim is to determine the manner in which SPECT studies should be performed in order to maximise the relevant clinical information given the characteristics and limitations of the particular gamma camera imaging system used. Thus the first part of this thesis is concerned with an assessment of the performance characteristics of the SPECT system itself. This involves the measurement of the fundamental planar imaging properties of the camera, their stability with rotation, the ability of the camera to rotate in a perfect circle and the accuracy of the transfer of the information from the camera to the computing system. Following this, the performance of the SPECT system as a whole is optimised. This is achieved by examining the fundamental aspects of the SPECT imaging process and by optimising the selection of the parameters chosen for the acquisition and reconstruction of the data. As an aid to this a novel mathematical construct is introduced. By taking the logarithm of the power spectrum of the normalised projection profile data, the relationship between the signal power and the noise power in the detected data can be visualised. From a theoretical consideration of the available options the Butterworth filter is chosen for use because it provides the best combination of spatial frequency transfer characteristics and flexibility. The flexibility of the Butterworth filter is an important feature because it means that the form of the actual function used in the reconstruction of a transaxial section can be chosen with regard to the relationship between the signal and the noise in the data. A novel method is developed to match the filter to the projection data. This consists of the construction of a mean angular power spectrum from the set of projection profiles required for the reconstruction of the particular transaxial section in question. From this the spatial frequency at which the signal becomes dominated by the noise is identified. The value which the Butterworth filter should take at this point can then be determined with regard to the requirements of the particular clinical investigation to be performed. The filter matching procedure can be extended to two dimensions in a practical manner by operating on the projection data after it has been filtered in the y direction. The efficacy of several methods to correct for the effects of scatter, attenuation and camera non-uniformity is also investigated. Having developed the optimised methodology for the acquisition and reconstruction of the SPECT data, the results obtained are applied to the investigation of some specific clinical problems. The assessment of intractable epilepsy using 99mTc-HMPAO is performed, followed by the investigation of ischaemic heart disease using 99mTc-MIBI and, finally, the diagnosis of avascular necrosis of the femoral head using 99mTc-MDP. The SPECT studies described in this thesis make a significant contribution to patient management.
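    Two of the ingredients described above, the Butterworth window and the mean angular power spectrum used to match it to the projection data, can be sketched as follows; the cutoff value and the data layout are assumptions for illustration, and the thesis's full matching procedure and two-dimensional extension are not reproduced.

```python
import numpy as np

def butterworth_lowpass(n_bins, cutoff, order, sampling=1.0):
    """1-D Butterworth window B(f) = 1 / (1 + (f / fc)^(2n)) over the
    frequency axis of a projection profile of n_bins samples."""
    f = np.fft.fftfreq(n_bins, d=sampling)
    return 1.0 / (1.0 + (np.abs(f) / cutoff) ** (2 * order))

def mean_angular_power_spectrum(projections):
    """Average log power spectrum over all projection angles for one
    transaxial slice; projections has shape (angles, bins)."""
    spectra = np.abs(np.fft.fft(projections, axis=1)) ** 2
    return np.log(spectra.mean(axis=0) + 1e-12)

# Example: pick a cutoff where the averaged spectrum flattens into the noise
# floor, then build the matched filter for use in the reconstruction.
# profs = ...   # (n_angles, n_bins) projection data for one slice
# cutoff = 0.25 # cycles per bin, chosen from the spectrum (assumed value)
# filt = butterworth_lowpass(profs.shape[1], cutoff, order=5)
```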