204 research outputs found

    Image Compression System using ANN

    The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition television (HDTV), has increased the need for effective and standardized image compression techniques. Among the emerging standards are JPEG, for compression of still images; MPEG, for compression of motion video; and CCITT H.261 (also known as Px64), for compression of video telephony and teleconferencing. All three of these standards employ a basic technique known as the discrete cosine transform (DCT), developed by Ahmed, Natarajan, and Rao [1974]. Image compression using the DCT is one of the simplest and most commonly used compression methods. The quality of compressed images, however, is reduced at higher compression ratios due to the lossy nature of DCT compression, hence the need to find an optimum DCT compression ratio. An ideal image compression system must yield high-quality compressed images with a good compression ratio while maintaining minimum time cost. The neural network associates image intensity with compression ratio in its search for an optimum ratio.
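    The abstract does not include an implementation, but the block-DCT step it refers to is straightforward to sketch. The following Python example is a rough illustration only: NumPy and SciPy are assumed, and the 8x8 block size and the `keep_fraction` parameter are illustrative stand-ins for the compression ratio that the paper's neural network would select. It transforms each block, discards the smallest coefficients, and reconstructs the image.

```python
# Minimal block-DCT compression sketch (illustrative only; not the paper's code).
# Assumes NumPy and SciPy; `keep_fraction` stands in for the compression ratio
# that the paper's neural network would select.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block_dct(image, block=8, keep_fraction=0.1):
    """Apply a block DCT and zero all but the largest coefficients."""
    h, w = image.shape
    h_pad, w_pad = h - h % block, w - w % block   # crop to a multiple of the block size
    out = np.zeros((h_pad, w_pad))
    for i in range(0, h_pad, block):
        for j in range(0, w_pad, block):
            coeffs = dctn(image[i:i+block, j:j+block], norm="ortho")
            # Keep only the largest-magnitude coefficients (the lossy step).
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[i:i+block, j:j+block] = idctn(coeffs, norm="ortho")
    return out

if __name__ == "__main__":
    img = np.random.rand(64, 64)                  # stand-in for a real grayscale image
    recon = compress_block_dct(img, keep_fraction=0.2)
    mse = np.mean((img - recon) ** 2)
    print(f"MSE at 20% retained coefficients: {mse:.5f}")
```

    Sweeping `keep_fraction` and measuring the reconstruction error is the kind of quality-versus-ratio trade-off that the optimum-ratio search operates on.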

    Modified Distributive Arithmetic based 2D-DWT for Hybrid (Neural Network-DWT) Image Compression

    Artificial Neural Networks (ANN) are widely used in signal and image processing for pattern recognition and template matching. The Discrete Wavelet Transform (DWT) is combined with a neural network to achieve higher compression of 2-D data such as images. Image compression using a neural network and DWT has shown superior results over classical techniques, with 70% higher compression and a 20% improvement in Mean Square Error (MSE). Hardware complexity and power dissipation are the major challenges addressed in this work for VLSI implementation. Modified distributive arithmetic DWT and multiplexer-based DWT architectures are designed to reduce the computational complexity of the hybrid architecture for image compression. A 2-D DWT architecture is built from the 1-D DWT architecture and implemented on an FPGA, operating at 268 MHz with a power consumption of less than 1.
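    As a rough software analogue of the separable 2-D DWT that the hardware architecture builds from 1-D stages (the modified distributive arithmetic and multiplexer-based designs themselves are not reproduced here), the sketch below computes one level of a 2-D Haar DWT by applying a 1-D transform to rows and then to columns; the Haar filter is an illustrative assumption, not necessarily the wavelet used in the paper.

```python
# One level of a separable 2-D Haar DWT (software illustration of the
# row/column 1-D decomposition that the hardware architecture pipelines;
# the Haar filter choice is an assumption, not taken from the paper).
import numpy as np

def haar_dwt_1d(x):
    """Single-level 1-D Haar transform along the last axis (even length assumed)."""
    approx = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2.0)
    detail = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_dwt_2d(image):
    """Apply the 1-D transform to rows, then to columns, yielding LL, LH, HL, HH."""
    lo, hi = haar_dwt_1d(image)              # transform rows
    ll, lh = haar_dwt_1d(lo.T)               # transform columns of the low band
    hl, hh = haar_dwt_1d(hi.T)               # transform columns of the high band
    return ll.T, lh.T, hl.T, hh.T

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)
    ll, lh, hl, hh = haar_dwt_2d(img)
    print("LL subband shape:", ll.shape)      # (4, 4): quarter-size approximation
```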

    Non-acyclicity of coset lattices and generation of finite groups


    Minimize the Percentage of Noise in Biomedical Images Using Neural Networks

    The overall goal of this research is to improve the quality of biomedical images for telemedicine, with a minimum percentage of noise in the retrieved image and less computation time. The novelty of this technique lies in the implementation of spectral coding for biomedical images using neural networks in order to accomplish the above objectives. This work continues an ongoing research project aimed at developing an efficient image compression approach for telemedicine in Saudi Arabia. We compare the efficiency of this technique against an existing image compression technique, namely JPEG2000, in terms of compression ratio, peak signal-to-noise ratio (PSNR), and computation time. To our knowledge, this is the first research to provide a comparative study with other techniques used in the compression of biomedical images. The work explores and tests biomedical images such as X-rays, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET).
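    The comparison metrics mentioned above, compression ratio and peak signal-to-noise ratio, are standard quantities. As a small, hedged illustration (NumPy only, assuming 8-bit grayscale images; the byte counts in the usage example are made up), the snippet below computes both for an original/reconstructed image pair.

```python
# Evaluation metrics commonly used to compare compression schemes
# (illustrative helper, not the paper's code); assumes 8-bit grayscale images.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original size to compressed size (higher is better)."""
    return original_bytes / compressed_bytes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(img, noisy):.2f} dB")
    print(f"Compression ratio: {compression_ratio(128 * 128, 4096):.1f}:1")
```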

    Iterative learning control of crystallisation systems

    Under the increasing pressure of issues like reducing the time to market, managing lower production costs, and improving the flexibility of operation, batch process industries strive towards the production of high-value-added commodities, i.e. specialty chemicals, pharmaceuticals, and agricultural and biotechnology-enabled products. For better design, consistent operation, and improved control of batch chemical processes, one cannot ignore the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and control tools based on process operating data. This study focuses on developing process-operating-data-based iterative learning control (ILC) strategies for batch processes, more specifically for batch crystallisation systems. The research began by reviewing the existing control strategies, fundamentals, mechanisms, and the various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background, an operating-data-driven ILC approach was developed to improve product quality from batch to batch. The concept of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology presented here is based on a linear time-varying (LTV) perturbation model in an ILC framework, providing convergent batch-to-batch improvement of the process performance indicator. A novel hierarchical ILC (HILC) scheme was then proposed for the systematic design of supersaturation control (SSC) of a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure, with a data-driven supersaturation controller on the upper level and a simple temperature controller on the lower level. To cover other data-based control of crystallisation processes, the study also revisited the existing direct nucleation control (DNC) approach; this part was devoted to a detailed strategic investigation of different possible DNC structures and, for the first time, a comparison of the results with those of a first-principles model-based optimisation. The DNC results in fact outperformed the model-based optimisation approach and established a guideline for selecting the preferred DNC structure. Batch chemical processes are distributed as well as nonlinear in nature, need to be operated over a wide range of operating conditions, and often run near the boundary of the admissible region. Because linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that take the spatio-temporal aspects into account. In this study, an operating-data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy was therefore presented for batch crystallisation processes. Model validation and optimisation results confirmed this as a promising approach to nonlinear control. The proposed data-based methodologies were evaluated through simulation case studies, laboratory experiments, and industrial pilot-plant experiments.
    For the simulation case studies, detailed mathematical models covering reaction kinetics and heat and mass balances were developed for a batch cooling crystallisation system of paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB®, which were then treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of paracetamol and isopropyl alcohol (IPA). All the experimental work, including qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated the extensive application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy, and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh, on a system of paracetamol and other powdered excipients used to make paracetamol tablets. The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All the simulation and experimental evaluations of the proposed approaches emphasised the potential of the data-driven techniques to provide considerable advances over the current state of the art in crystallisation control.
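    The batch-to-batch recipe update at the heart of ILC, as described in the abstract, can be illustrated with a generic first-order learning law, u_{k+1} = u_k + L·e_k, applied to a toy batch process. The plant, learning gain, and reference trajectory in the sketch below are illustrative assumptions, not the thesis's LTV perturbation model or crystalliser model.

```python
# Generic first-order ILC recipe update on a toy batch process (illustrative
# sketch; the plant, learning gain, and reference are assumptions, not the
# thesis's LTV perturbation model).
import numpy as np

def run_batch(u):
    """Toy static-plus-lag batch 'plant' (stand-in for the real crystalliser)."""
    y = 0.8 * u.copy()
    y[1:] += 0.1 * u[:-1]                     # small one-step lag term
    return y

def ilc(reference, n_batches=20, gain=0.6):
    """u_{k+1} = u_k + gain * e_k, where e_k is the previous batch's tracking error."""
    u = np.zeros_like(reference)
    for _ in range(n_batches):
        y = run_batch(u)
        e = reference - y                     # batch-wise tracking error
        u = u + gain * e                      # learning (recipe) update
    return u, reference - run_batch(u)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 50)
    ref = np.sin(np.pi * t)                   # desired trajectory (e.g., a supersaturation profile)
    u_final, e_final = ilc(ref)
    print(f"RMS error before learning: {np.sqrt(np.mean(ref ** 2)):.3f}")
    print(f"RMS error after 20 batches: {np.sqrt(np.mean(e_final ** 2)):.2e}")
```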

    Investigation of Orthogonal Polynomial Kernels as Similarity Functions for Pattern Classification by Support Vector Machines

    A kernel function is an important component in the support vector machine (SVM) kernel-based classifier. This is due to the elegant mathematical characteristics of a kernel, which amount to the mapping of non-linearly separable classes to an implicit higher-dimensional feature space where they can become linearly separable, and hence easier to classify. Such characteristics are those prescribed by the underpinning positive semi-definite (PSD) property. The properties of this feature space can, however, be difficult to interpret, which makes it hard to customise or select an appropriate kernel for the classification task at hand. Moreover, the high dimensionality of the feature space does not usually provide apparent and intuitive information about the natural representations of the data in the input space, as the construction of this feature space is only implicit. On the other hand, SVM kernels have also been regarded in many contexts as similarity functions that measure the resemblance between two patterns, which can be from the same or different classes. However, despite the elegant theory of PSD kernels and its remarkable implications for the performance of many learning algorithms, limited research effort seems to have studied kernels from this similarity perspective. Given that patterns from the same class share more similar characteristics than those belonging to different classes, this similarity perspective can provide more tangible means to craft or select appropriate kernels than the properties of implicit high-dimensional feature spaces that one might not even be able to calculate. This thesis therefore aims to: (i) investigate the similarity-based properties that can be exploited to characterise kernels (with a focus on the so-called “orthogonal polynomial kernels”) when used as similarity functions, and (ii) assess the influence of these properties on the performance of the SVM classifier. An appropriate similarity-based model is defined in the thesis based on how the shape of an SVM kernel should ideally look when used to measure the similarity between its two inputs. The model proposes that the similarity curve should be maximised when the two kernel inputs are identical, and that it should decay monotonically as they differ more and more from each other. Motivated by the pictorial characteristics of the Chebyshev kernels reported in the literature, the thesis adopts this kernel-shape perspective to also study other orthogonal polynomial kernels (such as the Legendre and Hermite kernels), to underpin the assessment of the proposed ideal shape of the similarity curve for kernel-based pattern classification by SVMs. The analysis of these polynomial kernels revealed that they are naturally constructed from smaller kernel building blocks, which are combined by summation and multiplication operations. A novel similarity fusion framework is therefore developed in this thesis to investigate the effect of these fusion operations on the shape characteristics of the kernels and on their classification performance. This framework is developed in three stages, where Stage 1 kernels are the building blocks constructed from only the polynomial order n (the highest order under consideration), whereas Stage 2 kernels combine all the Stage 1 kernel blocks (from order 0 to n) using a summation fusion operation. The Stage 3 kernels finally combine Stage 2 kernels with another kernel via a multiplication fusion operation.
    The analysis of the shape characteristics of these three-stage polynomial kernels revealed that their inherent fusion operations are synergistic in nature, as they bring the kernel shapes closer to the ideal similarity function model, and hence enable the calculation of more accurate similarity measures and, accordingly, better classification performance. Experimental results showed that these summative and multiplicative fusion operations improved the classification accuracy by average factors of 17.35% and 19.16%, respectively, depending on the dataset and the polynomial function employed. On the other hand, the shapes of the Stage 2 polynomial kernels have also been shown to oscillate after a certain threshold within the standard normalised input space of [-1, 1]. A simple adaptive data normalisation approach is therefore proposed to confine the data to the threshold window where these kernels exhibit the sought-after ideal shape characteristics, thereby eliminating the possibility of any data point being located outside this window, where the oscillations are observed. The implementation of the adaptive data normalisation approach accordingly leads to a more accurate calculation of similarity measures and improves the classification performance. When compared to the standard normalised input space, experimental results (performed on the Stage 2 kernels) demonstrate the effectiveness of the proposed adaptive data normalisation approach, with an average accuracy improvement factor of 11.772%, depending on the dataset and the polynomial function utilised. Finally, a new perspective is also introduced whereby the utilisation of orthogonal polynomials is perceived as a way of transforming the input space to another vector space, of the same dimensionality as the input space, prior to the kernel calculation step. Based on this perspective, a novel processing approach, based on vector concatenation, is proposed which, unlike the previous approaches, ensures that the quantities processed by each polynomial order are always formulated in vector form. This way, the attributes embedded in the structure of the original vectors are maintained intact. The proposed concatenated processing approach can also be used with any polynomial function, regardless of the parity combination of its monomials, whether they are only odd, only even, or a combination of both. Moreover, the Gaussian kernel is also proposed to be evaluated on the vectors processed by the polynomial kernels (instead of the linear kernel used in the previous approaches), due to the more accurate similarity shape characteristics of the Gaussian kernel, as well as its renowned ability to implicitly map the input space to a feature space of higher dimensionality. Experimental results demonstrate the superiority of the concatenated approach for all three polynomial-kernel stages of the developed similarity fusion framework and for all the polynomial functions under investigation. When the Gaussian kernel is evaluated on the vectors processed using the concatenated approach, the observed results show a statistically significant improvement in the average classification accuracy of 22.269%, compared to when the linear kernel is evaluated on the vectors processed using the previously proposed approaches.
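    As a minimal, hedged illustration of the summation-fusion idea described above, the sketch below builds a summed Chebyshev-polynomial kernel (a sum of inner products of elementwise Chebyshev features, which keeps it positive semi-definite) and plugs it into scikit-learn's SVC as a custom kernel. The exact kernel definitions, datasets, and normalisation used in the thesis may differ; the data here is simply scaled to the standard [-1, 1] input space, and the Iris dataset and order 4 are arbitrary choices.

```python
# Summed Chebyshev-polynomial kernel used as a custom SVM kernel (a minimal
# sketch of the 'summation fusion' idea; the thesis's exact kernel
# definitions, datasets, and normalisation may differ).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def chebyshev_features(X, order):
    """Stack T_0..T_order applied elementwise to X (X assumed scaled to [-1, 1])."""
    feats = [np.ones_like(X), X]
    for _ in range(2, order + 1):
        feats.append(2.0 * X * feats[-1] - feats[-2])   # T_n = 2x T_{n-1} - T_{n-2}
    return feats[: order + 1]

def chebyshev_kernel(X, Z, order=4):
    """K(x, z) = sum_i <T_i(x), T_i(z)>, a PSD 'Stage 2'-style summed kernel."""
    return sum(Tx @ Tz.T for Tx, Tz in zip(chebyshev_features(X, order),
                                           chebyshev_features(Z, order)))

if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    X = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)   # standard [-1, 1] input space
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel=lambda A, B: chebyshev_kernel(A, B, order=4)).fit(Xtr, ytr)
    print(f"Test accuracy with summed Chebyshev kernel: {clf.score(Xte, yte):.3f}")
```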

    Connected Attribute Filtering Based on Contour Smoothness

    A new attribute measuring the contour smoothness of 2-D objects is presented in the context of morphological attribute filtering. The attribute is based on the ratio of circularity to non-compactness and has a maximum of 1 for a perfect circle; it decreases as the object boundary becomes irregular. Computation on hierarchical image representation structures relies on five auxiliary data members and is rapid. Contour smoothness is a suitable descriptor for detecting and discriminating man-made structures from other image features. An example is demonstrated on a very-high-resolution satellite image using connected pattern spectra and the switchboard platform.
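    The paper's attribute combines circularity and non-compactness with definitions suited to fast computation on hierarchical image representations; those definitions are not reproduced here. As a loose, related illustration only, the sketch below computes a moment-of-inertia-based non-compactness for a binary object, which is close to 1 for a disc and grows as the shape becomes elongated or irregular.

```python
# Moment-based non-compactness for a binary object (NumPy only). This is the
# family of shape attributes used in connected attribute filtering; the
# paper's actual attribute (a ratio of circularity to non-compactness,
# maximal for a perfect circle) is not reproduced here.
import numpy as np

def non_compactness(mask):
    """2*pi*I / A^2, where I is the moment of inertia about the centroid.
    Roughly 1 for a disc; grows as the shape becomes elongated or irregular."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    cy, cx = ys.mean(), xs.mean()
    inertia = np.sum((ys - cy) ** 2 + (xs - cx) ** 2)
    return 2.0 * np.pi * inertia / area ** 2

if __name__ == "__main__":
    yy, xx = np.mgrid[-60:61, -60:61]
    disc = (xx ** 2 + yy ** 2) <= 50 ** 2
    bar = np.zeros_like(disc)
    bar[55:65, 5:115] = True                      # thin elongated rectangle
    print(f"disc: {non_compactness(disc):.3f}")   # about 1.0
    print(f"bar:  {non_compactness(bar):.3f}")    # clearly larger than 1
```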