30 research outputs found

    Generalized Adaptive Fuzzy Rule Interpolation

    As a substantial extension to fuzzy rule interpolation, which works on the basis of two neighbouring rules flanking an observation, adaptive fuzzy rule interpolation is able to restore system consistency when contradictory results are reached during interpolation. The approach first identifies the exhaustive sets of candidates, with each candidate consisting of a set of interpolation procedures that may jointly be responsible for the system inconsistency. Individual candidates are then modified such that all contradictions are removed and interpolation consistency is thus restored. The approach was developed on the assumption that contradictions may only result from the underlying interpolation mechanism, and that the identified candidates are indistinguishable in terms of their likelihood of being the real culprit. However, this assumption may not hold in real-world situations. This paper therefore further develops the adaptive method by treating observations, rules and interpolation procedures all as diagnosable and modifiable system components. Also, given the common practice in fuzzy systems that observations and rules are often associated with certainty degrees, the identified candidates are ranked by examining the certainty degrees of their components and their derivatives, and candidate modification is carried out on the basis of this ranking. This work significantly improves the efficacy of the existing adaptive system by exploiting more information during both the diagnosis and modification processes.
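
    To make the setting concrete, below is a minimal Python sketch of the kind of two-rule interpolation these adaptive methods build upon: a KH-style linear interpolation between the two rules flanking an observation, with triangular fuzzy sets represented by their three characteristic points. The set representation and the point-wise interpolation are illustrative assumptions, not the exact procedure developed in the paper.

        import numpy as np

        # A triangular fuzzy set is represented by its three characteristic points (a, b, c).
        def interpolate_conclusion(A1, B1, A2, B2, A_obs):
            """KH-style linear interpolation between two flanking rules A1=>B1 and A2=>B2.

            All arguments are triangular fuzzy sets given as three characteristic points.
            Returns the interpolated conclusion B* for the observation A_obs. This is an
            illustrative sketch, not the exact procedure developed in the paper."""
            A1, B1, A2, B2, A_obs = map(np.asarray, (A1, B1, A2, B2, A_obs))
            # Relative placement of the observation between the two antecedents,
            # measured point-wise on the characteristic points (assumed distinct).
            lam = (A_obs - A1) / (A2 - A1)
            # Shift the consequent's characteristic points by the same relative amount.
            return (1.0 - lam) * B1 + lam * B2

        # Example: an observation lying between the antecedents of two sparse rules.
        B_star = interpolate_conclusion(A1=(0, 1, 2), B1=(0, 1, 2),
                                        A2=(8, 9, 10), B2=(6, 7, 8),
                                        A_obs=(4, 5, 6))
        print(B_star)  # [3. 4. 5.], the consequent half-way between B1 and B2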

    Adaptive Fuzzy Interpolation with Prioritized Component Candidates

    Adaptive fuzzy interpolation strengthens the potential of fuzzy interpolative reasoning. It first identifies all possible sets of faulty fuzzy reasoning components, termed the candidates, each of which may have led to all the contradictory interpolations. It then tries to modify one selected candidate in an effort to remove all the contradictions and thus restore interpolative consistency. This approach assumes that all the candidates are equally likely to be the real culprit. However, this may not be the case in real situations, as certain identified reasoning components may be more likely than others to result in inconsistencies. This paper extends the adaptive approach by prioritizing all the generated candidates. This is achieved by exploiting the certainty degrees of fuzzy reasoning components and hence of derived propositions. From this, the candidate with the highest priority is modified first. This extension helps to quickly spot the real culprit and thus considerably improves the efficiency of the approach.
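
    As a rough illustration of how certainty degrees can drive candidate prioritization, the sketch below ranks candidate sets of reasoning components so that the least certain candidate is modified first. The minimum-based aggregation, the component names and the certainty values are hypothetical; the paper's actual ranking scheme may differ.

        # Illustrative sketch: rank candidate sets of reasoning components by the
        # certainty degrees attached to their components. The minimum aggregation and
        # the "least certain candidate first" policy are assumptions for this demo.
        def prioritize_candidates(candidates, certainty):
            """candidates: list of sets of component identifiers.
            certainty: mapping from component identifier to a certainty degree in [0, 1].
            Returns the candidates ordered so the least trustworthy one is tried first."""
            def aggregate(candidate):
                # A candidate is only as certain as its weakest component.
                return min(certainty[c] for c in candidate)
            return sorted(candidates, key=aggregate)

        certainty = {"rule_1": 0.9, "rule_2": 0.4, "obs_1": 0.8, "interp_3": 0.6}
        candidates = [{"rule_1", "obs_1"}, {"rule_2", "interp_3"}, {"interp_3"}]
        for cand in prioritize_candidates(candidates, certainty):
            print(cand)  # the candidate containing rule_2 (certainty 0.4) comes first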

    Curvature-based sparse rule base generation for fuzzy rule interpolation

    Fuzzy logic has been widely and successfully utilised in many real-world applications. The most common application of fuzzy logic is the rule-based fuzzy inference system, which is mainly composed of two parts: an inference engine and a fuzzy rule base. Conventional fuzzy inference systems always require a rule base that fully covers the entire problem domain (i.e., a dense rule base). Fuzzy rule interpolation (FRI) makes inference possible with sparse rule bases, which may not cover some parts of the problem domain. In addition to extending the applicability of fuzzy inference systems, fuzzy interpolation can also be used to reduce system complexity for over-complex fuzzy inference systems. There are typically two ways to generate fuzzy rule bases, i.e., knowledge-driven and data-driven approaches. Almost all of these approaches target only dense rule bases for conventional fuzzy inference systems. Knowledge-driven methods may be hampered by the limited availability of expert knowledge, which can also be subjective, whilst redundancy often exists in fuzzy rule-based models acquired from numerical data. Note that various rule base reduction approaches have been proposed, but they are all based on certain similarity measures and are likely to cause performance deterioration along with the size reduction. This project, for the first time, innovatively applies curvature values to distinguish important features and instances in a dataset, to support the construction of a neat and concise sparse rule base for fuzzy rule interpolation. In addition to working in a three-dimensional problem space, the work also extends the natural three-dimensional curvature calculation to problems with higher dimensions, which greatly broadens the applicability of the proposed approach. As a result, the proposed approach alleviates the ‘curse of dimensionality’ and helps to reduce the computational cost of fuzzy inference systems. The proposed approach has been validated and evaluated on three real-world applications. The experimental results demonstrate that the proposed approach is able to generate sparse rule bases with fewer rules yet better performance, which confirms the power of the proposed system. In addition to fuzzy rule interpolation, the proposed curvature-based approach can also be readily used as a general feature selection tool to work with other machine learning approaches, such as classifiers.
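
    The following is a loose, illustrative sketch of how curvature information can be used to rank features: the target is profiled along each feature and the mean absolute second difference of the profile serves as a curvature proxy. The binning-based estimate, the bin count and the toy data are assumptions made for the demo; this is not the curvature calculation developed in the thesis.

        import numpy as np

        def feature_curvature_scores(X, y, n_bins=8):
            """Rough curvature-based feature ranking (an illustration, not the thesis method).

            The target is averaged over equal-width bins along each feature, and the mean
            absolute second difference of the binned profile is used as a curvature proxy:
            features along which the output surface bends score high, features the output
            is flat along score low."""
            scores = []
            for j in range(X.shape[1]):
                edges = np.linspace(X[:, j].min(), X[:, j].max(), n_bins + 1)
                idx = np.clip(np.digitize(X[:, j], edges) - 1, 0, n_bins - 1)
                profile = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                                    for b in range(n_bins)])
                scores.append(np.mean(np.abs(np.diff(profile, n=2))))
            return np.array(scores)

        # Toy data: y depends non-linearly on feature 0 and not at all on feature 1.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(5000, 2))
        y = np.sin(3 * X[:, 0])
        print(feature_curvature_scores(X, y))  # feature 0 should score noticeably higher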

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution is an area of great interest in recent years and is extensively used in applications like video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity. Its most common application is to provide a better visual effect after resizing a digital image for display or printing. One of the methods of improving image resolution is through the employment of 2-D interpolation. An up-scaled image should retain all the image details with a very low degree of blurring for better visual quality. In the literature, many efficient 2-D interpolation schemes are found that preserve the image details well in the up-scaled images, particularly at regions with edges and fine details. Nevertheless, these existing interpolation schemes also introduce a blurring effect in the up-scaled images due to the high frequency (HF) degradation during the up-sampling process. Hence, it is felt that there is sufficient scope to further improve their performance through various efficient but simple spatial-domain pre-processing, post-processing and composite schemes that effectively restore the HF contents in the up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken for further performance improvement through the incorporation of the various proposed algorithms.
    The pre-processing algorithms developed in this thesis are summarized here; the term pre-processing refers to processing the low-resolution input image prior to image up-scaling. The pre-processing algorithms proposed in this thesis are: Laplacian of Laplacian based global pre-processing (LLGP); hybrid global pre-processing (HGP); iterative Laplacian of Laplacian based global pre-processing (ILLGP); unsharp masking based pre-processing (UMP); iterative unsharp masking (IUM); and error based up-sampling (EU). LLGP, HGP and ILLGP are three spatial-domain pre-processing algorithms based on 4th, 6th and 8th order derivatives, respectively, which alleviate non-uniform blurring in up-scaled images: they obtain the high frequency (HF) extracts of an image by employing higher order derivatives and perform precise sharpening on the low-resolution image to alleviate the blurring in its 2-D up-sampled counterpart. In the unsharp masking based pre-processing (UMP) scheme, a blurred version of the low-resolution image is used to extract the HF content from the original version through image subtraction; a weighted version of the HF extract is superimposed on the original image to produce a sharpened image prior to up-scaling, countering the blurring effectively. IUM uses many iterations to generate an unsharp mask that contains very high frequency (VHF) components; the VHF extract is the result of signal decomposition into sub-bands using the concept of an analysis filter bank. Since the degradation of VHF components is the greatest, restoring such components yields much better restoration performance. EU is another pre-processing scheme in which the HF degradation caused by image up-scaling is extracted as a prediction error containing the lost high frequency components; when this error is superimposed on the low-resolution image prior to up-sampling, blurring is considerably reduced in the up-scaled images.
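
    To illustrate the unsharp masking based pre-processing (UMP) idea just described, here is a minimal sketch: high-frequency content is extracted by subtracting a blurred copy from the low-resolution image, a weighted portion is added back, and the pre-sharpened image is then up-scaled. The Gaussian blur, the weight value and the cubic-spline zoom (standing in for Lanczos-3 interpolation) are illustrative assumptions rather than the thesis implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def unsharp_mask_then_upscale(low_res, weight=0.6, sigma=1.0, scale=2):
            """Sketch of the UMP idea: pre-sharpen the low-resolution image, then up-scale.

            HF content is extracted by subtracting a blurred copy from the low-resolution
            image, a weighted portion of it is added back, and only then is the image
            up-scaled. Cubic-spline zoom stands in for the Lanczos-3 interpolation here."""
            low_res = low_res.astype(float)
            hf = low_res - gaussian_filter(low_res, sigma)   # HF extract via subtraction
            sharpened = np.clip(low_res + weight * hf, 0, 255)
            return zoom(sharpened, scale, order=3)           # 2-D up-scaling

        # Example on a synthetic low-resolution image.
        img = np.tile(np.linspace(0, 255, 64), (64, 1))
        print(unsharp_mask_then_upscale(img).shape)  # (128, 128)
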
    The post-processing algorithms developed in this thesis are summarized next; the term post-processing refers to processing the high-resolution up-scaled image. The post-processing algorithms proposed in this thesis are: local adaptive Laplacian (LAL); fuzzy weighted Laplacian (FWL); and the Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, locally based scheme: the local regions of an up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel. The weights of the LAL kernel are varied according to the normalized local variance so as to provide a greater degree of HF enhancement to high-variance regions than to their low-variance counterparts, effectively countering the non-uniform blurring. Furthermore, the FWL post-processing scheme, with a higher degree of non-linearity, is proposed to further improve the performance of LAL; being a fuzzy mapping scheme, FWL is highly nonlinear and resolves the blurring problem more effectively than LAL, which employs a linear mapping. An LFLANN based post-processing scheme is also proposed to minimize a cost function so as to reduce the blurring in a 2-D up-scaled image. Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of nonlinearity; the requirement for multiple layers can therefore be replaced by a single-layer LFLANN architecture that reduces the cost function effectively for better restoration performance. With its single-layer architecture, LFLANN has reduced computational complexity and hence is suitable for various real-time applications.
    There is scope for further improvement of the stand-alone pre-processing and post-processing schemes by combining them into composite schemes. Here, two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in an up-scaled image. CS-I is developed by combining the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme. Another highly nonlinear composite scheme, CS-II, combines the ILLGP scheme with the fuzzy weighted Laplacian post-processing scheme for better performance than the stand-alone schemes. Finally, it is observed that the proposed algorithms ILLGP, IUM, FWL, LFLANN and CS-II perform better in their respective categories for effectively reducing blurring in the up-scaled images.
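
    As an illustration of the variance-adaptive sharpening idea behind the local adaptive Laplacian (LAL) post-processing, the sketch below boosts high frequencies in proportion to the normalised local variance of the up-scaled image. The window size, strength and normalisation are arbitrary demo choices, not the kernel design used in the thesis.

        import numpy as np
        from scipy.ndimage import uniform_filter, laplace

        def local_adaptive_laplacian(upscaled, window=7, strength=0.8):
            """Sketch of variance-adaptive Laplacian sharpening applied after up-scaling.

            The high-frequency boost at each pixel grows with the normalised local
            variance, so detailed regions are sharpened more than smooth ones. The
            window size and strength are arbitrary demo values."""
            img = upscaled.astype(float)
            local_mean = uniform_filter(img, window)
            local_var = np.maximum(uniform_filter(img ** 2, window) - local_mean ** 2, 0.0)
            weights = local_var / (local_var.max() + 1e-12)  # normalised to [0, 1]
            # Subtracting the Laplacian boosts high frequencies (standard sharpening).
            return np.clip(img - strength * weights * laplace(img), 0, 255)

        # Typical use: apply directly to the 2-D up-scaled image to counter non-uniform blurring.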

    Image processing system based on similarity/dissimilarity measures to classify binary images from contour-based features

    Image Processing Systems (IPS) try to solve tasks like image classification or segmentation based on image content. Many authors have proposed a variety of techniques to tackle the image classification task. Plenty of methods address the performance of the IPS [1], as well as the influence of many external circumstances, such as illumination, rotation, and noise [2]. However, there is an increasing interest in classifying shapes from binary images (BI). Shape Classification (SC) from BI considers a segmented image as a sample (background segmentation [3]) and aims to identify objects based on their shape.
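
    A minimal, hypothetical sketch of the pipeline the abstract outlines is given below: contour extraction from a binary image, contour-based features (Hu moments here), and a nearest-neighbour decision under a dissimilarity measure. The use of OpenCV, the specific descriptor and the Euclidean distance are assumptions, not the paper's method.

        import cv2
        import numpy as np

        def contour_features(binary_img):
            """Log-scaled Hu moments of the largest external contour (an illustrative
            contour-based descriptor; the paper's actual features may differ)."""
            contours, _ = cv2.findContours(binary_img.astype(np.uint8),
                                           cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            largest = max(contours, key=cv2.contourArea)
            hu = cv2.HuMoments(cv2.moments(largest)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # compress dynamic range

        def classify(query_img, labelled_imgs):
            """Nearest-neighbour shape classification under a Euclidean dissimilarity,
            standing in for the similarity/dissimilarity measures studied in the paper."""
            q = contour_features(query_img)
            distances = {label: np.linalg.norm(q - contour_features(img))
                         for label, img in labelled_imgs.items()}
            return min(distances, key=distances.get)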