
    Inverse Kinematics Based on Fuzzy Logic and Neural Networks for the WAM-Titan II Teleoperation System

    The inverse kinematic problem is crucial for robotics. In this paper, a solution algorithm is presented that uses artificial intelligence to improve the pseudo-inverse Jacobian calculation for the 7-DOF Whole Arm Manipulator (WAM) and 6-DOF Titan II teleoperation system. An investigation of inverse kinematics based on fuzzy logic and artificial neural networks for the teleoperation system was undertaken. Various methods, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Genetic Algorithms (GA), Multilayer Perceptron (MLP) Feedforward Networks, Radial Basis Function Networks (RBF), and Generalized Regression Neural Networks (GRNN), were tested and simulated using MATLAB. Each method for identification of the pseudo-inverse problem was tested, and the best method was selected from the simulation results and the error analysis. Of the methods compared, the Multilayer Perceptron trained with Levenberg-Marquardt (MLP-LM) had the smallest error and the fastest computation. For the WAM-Titan II teleoperation system, the new inverse kinematics calculations for the Titan II were simulated and analyzed using MATLAB. Finally, extensive C code for the alternative algorithm was developed, and the inverse kinematics based on the artificial neural network with the LM method was implemented in the real system. The maximum Cartesian position error was 1.3 inches, and across several trajectories the new method ran in roughly 75% of the time required by the conventional method. Because fast real-time performance is vital in teleoperation, these results show that the new inverse kinematics method based on MLP-LM is very successful, with acceptable error.
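
    As a rough illustration of the approach, the sketch below trains a feedforward network to invert the forward kinematics of a robot arm. Everything here is an assumption for illustration: a toy 2-link planar arm stands in for the 7-DOF WAM, and scikit-learn's MLPRegressor (which offers L-BFGS and Adam solvers, not Levenberg-Marquardt) stands in for the paper's MATLAB MLP-LM network.

```python
# Minimal sketch: learn inverse kinematics with an MLP on a toy 2-link arm.
# The workspace is restricted so the inverse mapping is single-valued.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8  # link lengths (hypothetical)

def forward_kinematics(q):
    """Map joint angles (N, 2) to end-effector positions (N, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
q_train = rng.uniform([0.0, 0.2], [np.pi / 2, np.pi - 0.2], size=(5000, 2))
x_train = forward_kinematics(q_train)

# Train the network to map Cartesian targets back to joint angles.
ik_net = MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                      max_iter=2000, random_state=0).fit(x_train, q_train)

# Evaluate Cartesian error by pushing predicted joints through the true FK.
q_pred = ik_net.predict(x_train[:100])
err = np.linalg.norm(forward_kinematics(q_pred) - x_train[:100], axis=1)
print(f"max Cartesian error: {err.max():.4f}")
```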

    Scribe: A Clustering Approach To Semantic Information Retrieval

    Information retrieval is the process of fulfilling a user's need for information by locating items in a data collection that are similar to a complex query, often posed in natural language. Latent Semantic Indexing (LSI) was the predominant technique employed at the National Institute of Standards and Technology's Text Retrieval Conference for many years, until limitations in its scalability to large data sets were discovered. This thesis describes SCRIBE, a modification of LSI with improved scalability. SCRIBE clusters its semantic index into discrete volumes described by high-dimensional extensions of computer graphics data structures. SCRIBE's clustering strategy limits the number of items that must be searched and provides sub-linear time complexity in the number of documents. Experimental results with a large, natural language document collection demonstrate that SCRIBE achieves retrieval accuracy similar to LSI but requires one tenth of the time.
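
    A minimal sketch of the clustered-index idea follows, under stated assumptions: TF-IDF plus truncated SVD stands in for LSI, and k-means stands in for SCRIBE's high-dimensional spatial data structures, which the abstract does not describe in enough detail to reproduce. A query is answered by searching only the members of its nearest cluster, which is what yields the sub-linear behaviour.

```python
# Sketch: build a latent semantic index, partition it, search one cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

docs = ["fuzzy rule interpolation", "neural network control",
        "semantic text retrieval", "latent semantic indexing",
        "robot arm kinematics", "document clustering methods"]

tfidf = TfidfVectorizer().fit(docs)
svd = TruncatedSVD(n_components=3, random_state=0)
index = normalize(svd.fit_transform(tfidf.transform(docs)))  # LSI-style vectors

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(index)

def search(query, k=2):
    v = normalize(svd.transform(tfidf.transform([query])))
    cluster = km.predict(v)[0]                       # pick the nearest volume
    members = np.flatnonzero(km.labels_ == cluster)  # search only its members
    sims = index[members] @ v.ravel()                # cosine similarity
    return [docs[members[i]] for i in np.argsort(sims)[::-1][:k]]

print(search("semantic document search"))
```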

    A Mathematical Measurement For Korean Text Mining and Its Application

    Department of Mathematical Sciences
    In modern society we are buried beneath an overwhelming amount of text data on the internet, and we can no longer simply browse it at leisure. To address this problem, and especially to grasp the essence of the text data we are presented with, there have been numerous studies on the relationship between text data and the ease of perceiving a text's meaning. However, most of these studies focused on English text data, and since most did not take linguistic characteristics into account, the same methods are not suitable for Korean text. A special method is required to analyze Korean text data that exploits the characteristics of the Korean language. We therefore propose a new framework for Korean text mining across various texts via appropriate mathematical measurements. The framework consists of three parts: 1) text summarization, 2) text clustering, and 3) relational text learning. Text summarization is the extraction of the essential sentences from a text. As a measure of importance, we propose specific formulas that focus on the characteristics of Korean; these formulas provide the input features for the fuzzy summarization system. However, this method has a significant defect for large data sets: the number of summarized sentences increases with the word count of the text. To solve this, we propose text clustering. This field has been studied for a long time and involves a trade-off between accuracy and speed. Considering the syllabic features of Asian languages, we have designed the "Syllable Vector" as a new measurement; implemented with text clustering, it has shown remarkable performance in both accuracy and speed by effectively reducing dimensionality. Thirdly, we considered the relational features of text data. The above concepts treat each document independently. To handle relations between documents, we designed a new architecture for text learning using neural networks (NN). One of the most remarkable recent works in natural language processing (NLP) is "word2vec", which is built with artificial neural networks. Our proposed model has a learning structure of bipartite layers using meta-information between text data, with a focus on citation relationships. This structure reflects the latent topic of a text using its citation information, and it can overcome the shortcomings of the conventional system based on the term-document matrix.
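
    The abstract does not define the "Syllable Vector" precisely, but one plausible minimal reading is a syllable-count representation: each Hangul syllable is a single Unicode code point, so a character-level analyzer yields syllable vectors directly. The sketch below is only that reading, not the thesis' actual measurement.

```python
# Sketch: syllable-count vectors for Korean sentences via a character analyzer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "한국어 텍스트 마이닝",   # "Korean text mining"
    "텍스트 데이터 분석",     # "text data analysis"
    "수학적 측도 제안",       # "proposing a mathematical measure"
]

# Count individual syllables; the preprocessor drops spaces so only
# Hangul syllables enter the vocabulary.
vec = CountVectorizer(analyzer="char", preprocessor=lambda s: s.replace(" ", ""))
X = vec.fit_transform(sentences)

# Syllable overlap gives a cheap, low-dimensional similarity for clustering.
print(cosine_similarity(X[0], X[1]))  # sentences 1 and 2 share 텍, 스, 트
print(cosine_similarity(X[0], X[2]))  # no shared syllables with sentence 3
```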

    Rule model simplification

    Centre for Intelligent Systems and their Applications
    Due to its high performance and comprehensibility, fuzzy modelling is becoming more and more popular in dealing with nonlinear, uncertain and complex systems, for tasks such as signal processing, medical diagnosis and financial investment. However, there are no principled, routine methods to obtain an optimal fuzzy rule base that is not only compact but also retains high prediction (or classification) performance. To achieve this, two major problems need to be addressed. First, as the number of input variables increases, the number of possible rules grows exponentially (the curse of dimensionality). This inevitably deteriorates the transparency of the rule model and can lead to over-fitting, where the model obtains high performance on the training data but fails to predict unknown data successfully. Second, if the rule base is made too compact, gaps may occur in it (termed a sparse rule base), and such a rule base cannot be handled by conventional fuzzy inference such as Mamdani inference. This Ph.D. work proposes a rule base simplification method and a family of fuzzy interpolation methods to solve these two problems. The proposed simplification method reduces rule base complexity via Retrieving Data from Rules (RDFR): it first retrieves a collection of new data from an original rule base, and this new data is then used for re-training to build a more compact rule model. This method has four advantages: 1) it can simplify rule bases without using the original training data, but is capable of dealing with combinations of rules and data; 2) it can integrate with any rule induction or reduction scheme; 3) it implements the similarity merging and inconsistency removal approaches; 4) it can make use of rule weights. Illustrative examples have been given to demonstrate the potential of this work. The second part of the work concerns the development of a family of transformation-based fuzzy interpolation methods (termed HS methods). These methods first introduce the general concept of representative values (RVs), and then use this to interpolate fuzzy rules involving arbitrary polygonal fuzzy sets by means of scale and move transformations; a simplified sketch of RV-based interpolation follows this abstract. This family consists of two sub-categories: the original HS methods and the enhanced HS methods. The HS methods not only inherit the common advantages of fuzzy interpolative reasoning -- helping reduce rule base complexity and allowing inferences to be performed within simple and sparse rule bases -- but also have two other advantages compared to existing fuzzy interpolation methods. Firstly, they provide a degree of freedom to choose among RV definitions to meet different application requirements. Secondly, they can handle the interpolation of multiple rules, with each rule having multiple antecedent variables associated with arbitrary polygonal fuzzy membership functions. This makes interpolative inference a practical solution for real-world applications. The enhanced HS methods are the first proposed interpolation methods that preserve piece-wise linearity, which may provide a solution to the interpolation problem in very high-dimensional Cartesian spaces in the mathematics literature. The RDFR-based simplification method has been applied to a variety of applications including nursery prediction, the Saturday morning problem and credit application. The HS methods have been utilized in truck backer-upper control and computer hardware prediction; the former demonstrates the simplification potential of the HS methods, while the latter shows their capability in dealing with sparse rule bases. The RDFR-based simplification method and HS methods are further integrated into a novel model simplification framework, which has been applied to a scaled-up application (computer activity prediction). In the experimental studies, the proposed simplification framework leads to very good fuzzy rule base reductions whilst retaining, or improving, performance.
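
    As a simplified illustration of RV-based interpolation, the sketch below interpolates between two rules with triangular antecedents and consequents, using the centre of gravity as the representative value. The full scale and move transformations of the HS methods are omitted and vertices are interpolated linearly, so this shows the general idea rather than the proposed method itself.

```python
# Sketch: RV-based fuzzy rule interpolation over a sparse rule base.
# Triangular sets are (left, peak, right) triples.
def rv(t):
    """Representative value of a triangular fuzzy set: its centroid."""
    return sum(t) / 3.0

def interpolate(a1, b1, a2, b2, a_star):
    """Infer B* for an observation A* lying between antecedents A1 and A2."""
    lam = (rv(a_star) - rv(a1)) / (rv(a2) - rv(a1))  # relative placement
    return tuple((1 - lam) * p1 + lam * p2 for p1, p2 in zip(b1, b2))

# Sparse rule base: "if x is A1 then y is B1", "if x is A2 then y is B2",
# with a gap between A1 and A2 that conventional Mamdani inference cannot cover.
A1, B1 = (0, 1, 2), (0, 2, 4)
A2, B2 = (8, 9, 10), (10, 12, 14)
A_star = (4, 5, 6)  # observation falling in the gap

print(interpolate(A1, B1, A2, B2, A_star))  # -> (5.0, 7.0, 9.0)
```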

    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data is available, but data without analysis has little value. The demand for data analysis is therefore increasing daily, and the consequence is the appearance of a large number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. It draws on methods from several knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering, and it is this mixture of areas that has given rise to what we call Data Science. New problems and new solutions are multiplying rapidly as large volumes of data are generated. Current and future challenges require greater care in creating new solutions that suit the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied, which highlights the importance of building the foundations of Data Science. This book is dedicated to solutions for, and discussions of, measuring uncertainties in data analysis problems.

    Low-level interpretability and high-level interpretability: a unified view of data-driven interpretable fuzzy system modelling

    This paper aims to provide an in-depth overview of designing interpretable fuzzy inference models from data within a unified framework. The objective of complex system modelling is to develop reliable and understandable models that give human beings insight into complex real-world systems whose first-principles models are unknown. Because system behaviour can be described naturally as a series of linguistic rules, data-driven fuzzy modelling has become an attractive and widely used paradigm for this purpose. However, fuzzy models constructed from data by adaptive learning algorithms usually suffer from a loss of interpretability. Model accuracy and interpretability are two conflicting objectives, so preserving interpretation during adaptation in data-driven fuzzy system modelling is a challenging task that has received much attention in the fuzzy system modelling community. In order to clearly discriminate the different roles of fuzzy sets, input variables, and other components in achieving an interpretable fuzzy model, this paper first proposes a taxonomy of fuzzy model interpretability in terms of low-level interpretability and high-level interpretability. Low-level interpretability is achieved by optimizing the membership functions with respect to semantic criteria at the fuzzy set level, while high-level interpretability is obtained by addressing the coverage, completeness, and consistency of the rules with respect to criteria at the fuzzy rule level. Criteria for low-level and high-level interpretability are identified. Different data-driven fuzzy modelling techniques in the literature that focus on interpretability are reviewed and discussed from the perspective of low-level and high-level interpretability. Furthermore, some open problems concerning interpretable fuzzy models are identified and some potential new research directions on fuzzy model interpretability are suggested. Crown Copyright © 2008.
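
    As a concrete, hypothetical reading of two low-level criteria of the kind discussed above, the sketch below checks a fuzzy partition for coverage (every domain point has at least some membership in one set) and distinguishability (no two sets overlap too strongly). The triangular sets and thresholds are illustrative choices, not values from the paper.

```python
# Sketch: coverage and distinguishability checks on a triangular fuzzy partition.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

sets = [(-1, 0, 4), (2, 5, 8), (6, 10, 11)]  # "low", "medium", "high"
x = np.linspace(0, 10, 501)
mu = np.array([tri(x, *s) for s in sets])

# Coverage: every point of the domain reaches at least epsilon membership.
coverage_ok = (mu.max(axis=0) >= 0.2).all()

# Distinguishability: cap the strongest pairwise overlap between sets.
n = len(sets)
overlap = [(mu[i] * mu[j]).max() for i in range(n) for j in range(i + 1, n)]
distinguishable = max(overlap) <= 0.7

print(coverage_ok, distinguishable)
```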

    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modelled in all necessary detail as easily as industrial robot tasks; a service robotic system therefore has to be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems limiting the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to come up with algorithms that can reliably and quickly extract and match the useful visual information necessary to interpret the environment automatically in real time. Although considerable research has been conducted in recent years on algorithms for computer and robot vision problems, there are still open research challenges concerning reliability, accuracy and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation and illumination changes; in addition, they are relatively easy to extract and to match against a large database of local features. The SIFT algorithm has two main drawbacks: first, its computational complexity increases rapidly with the number of keypoints, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor; second, SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of the SIFT algorithm in robot vision applications, which often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address these constraints: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system, and the achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
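
    For reference, the standard SIFT extraction and ratio-test matching pipeline discussed above looks roughly like the following in OpenCV. The image paths are placeholders, and this is the baseline pipeline, not the dissertation's speeded-up or robust variants.

```python
# Sketch: SIFT keypoint extraction and Lowe's ratio-test matching with OpenCV.
import cv2

img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
img2 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching on the 128-D descriptors; this step dominates runtime,
# which is the first drawback noted above.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if it is clearly better than the second-best
# candidate, suppressing ambiguous correspondences.
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches out of {len(knn)} candidates")
```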