38 research outputs found

    Speedup of Interval Type 2 Fuzzy Logic Systems Based on GPU for Robot Navigation

    As the number of rules and the sample rate of type 2 fuzzy logic systems (T2FLSs) increase, the speed of calculation becomes a problem. The T2FLS has a large amount of inherent algorithmic parallelism that modern CPU architectures do not exploit. Many rules and algorithms in the T2FLS can be sped up on a graphics processing unit (GPU), as long as the majority of the computations at the various stages and components are not dependent on each other. This paper demonstrates how to implement interval type 2 fuzzy logic systems (IT2-FLSs) on the GPU and presents experiments on obstacle-avoidance behaviour for robot navigation. GPU-based calculation is a high-performance solution that also frees up the CPU. The experimental results show that the performance of the GPU is many times faster than that of the CPU.
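
    The parallelism the abstract refers to is easiest to see in the rule-firing stage, where every rule's membership grades can be evaluated independently. Below is a minimal sketch, not the paper's implementation, assuming Gaussian memberships with an uncertain width (the footprint of uncertainty) and using CuPy so the same NumPy-style code executes on the GPU; all shapes and parameter values are illustrative, and the (inherently more sequential) type-reduction step is not shown.

```python
import cupy as cp  # drop-in NumPy replacement that runs on the GPU

def it2_firing_intervals(x, centres, sigma_lo, sigma_hi):
    """Per-rule [lower, upper] firing strengths for one input vector x.

    x        : (n_inputs,)          current sensor reading
    centres  : (n_rules, n_inputs)  Gaussian membership centres
    sigma_lo : (n_rules, n_inputs)  narrow widths -> lower membership
    sigma_hi : (n_rules, n_inputs)  wide widths   -> upper membership
    """
    d2 = (x[None, :] - centres) ** 2
    mu_lo = cp.exp(-d2 / (2.0 * sigma_lo ** 2))  # lower membership grades
    mu_hi = cp.exp(-d2 / (2.0 * sigma_hi ** 2))  # upper membership grades
    # product t-norm across input dimensions: one firing interval per rule,
    # with all rules evaluated in parallel on the device
    return mu_lo.prod(axis=1), mu_hi.prod(axis=1)

# illustrative rule base: 4096 rules over 3 sensor inputs
n_rules, n_inputs = 4096, 3
cp.random.seed(0)
centres = cp.random.uniform(0.0, 1.0, (n_rules, n_inputs))
sigma_hi = cp.random.uniform(0.2, 0.4, (n_rules, n_inputs))
sigma_lo = 0.5 * sigma_hi                      # narrower width -> lower bound
f_lo, f_hi = it2_firing_intervals(cp.asarray([0.3, 0.7, 0.5]),
                                  centres, sigma_lo, sigma_hi)
```

    Swapping `cupy` for `numpy` yields a CPU baseline over the same code, which is how a GPU-versus-CPU comparison like the paper's would typically be set up.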

    Learning and recognition by a dynamical system with a plastic velocity field

    Learning is a mechanism intrinsic to all sentient biological systems. Despite the diverse range of paradigms that exist, it appears that no artificial system has yet been developed that can emulate learning with a degree of accuracy or efficiency comparable to the human brain. With the development of new approaches comes the opportunity to reduce this disparity in performance. A model presented by Janson and Marsden [arXiv:1107.0674 (2011)] (the "memory foam" model) redefines the critical features that an intelligent system should demonstrate. Rather than focussing on the topological constraints of a rigid neuron structure, the emphasis is placed on the online, unsupervised classification, retention and recognition of stimuli. In contrast to traditional AI approaches, the system's memory is not plagued by spurious attractors or the curse of dimensionality. The ability to learn continuously, whilst simultaneously recognising aspects of a stimulus, ensures that this model more closely embodies the operations occurring in the brain than many other AI approaches.

    Here we consider the pertinent deficiencies of classical artificial learning models before introducing and developing this memory-foam self-shaping system. As the model is relatively new, its limitations are not yet apparent; these must be established by testing the model in various complex environments. Here we consider its ability to learn and recognise the RGB colours composing cartoons observed via a web camera. The self-shaping vector field of the system is shown to adjust its composition to reflect the distribution of three-dimensional inputs. The model builds a memory of its experiences and is shown to recognise unfamiliar colours by locating the most appropriate class with which to associate a stimulus. In addition, we discuss a method for mapping a three-dimensional RGB input onto a line spectrum of colours. The corresponding reduction of the model's dimensions is shown to dramatically improve computational speed; however, the model is then restricted to a much smaller set of representable colours.

    This model's prototype offers a gradient description of recognition; it is evident that a more complex, non-linear alternative may be used to better characterise the classes of the system. It is postulated that non-linear attractors may be utilised to convey the concept of hierarchy that relates the different classes of the system. We relate the dynamics of the van der Pol oscillator to this plastic self-shaping system, first demonstrating the recognition of stimuli with limit-cycle trajectories. The location and frequency of each cycle depend on the topology of the system's energy potential. For a one-dimensional stimulus the dynamics are restricted to the cycle; the extension of the model to an N-dimensional stimulus is approached via the coupling of N oscillators. Here we study systems of up to three mutually coupled oscillators and relate limit cycles, fixed points and quasi-periodic orbits to the recognition of stimuli.
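
    As a toy illustration of the kind of system the abstract describes, and emphatically not the authors' model, the sketch below lets each stimulus deepen a Gaussian well in an energy potential over RGB space, so the velocity field (the negative gradient of the potential) reshapes itself around what has been seen; recognition is a gradient flow into the nearest well. The well width, learning rate and decay constant are all assumptions chosen for the demo.

```python
import numpy as np

class PlasticField:
    """Toy self-shaping field: stimuli carve Gaussian wells into a potential."""
    def __init__(self, width=0.15, rate=0.5, decay=0.995):
        self.centres = []          # well positions in RGB space (memories)
        self.depths = []           # well depths (memory strength)
        self.width, self.rate, self.decay = width, rate, decay

    def learn(self, x):
        """Deepen the nearest well, or open a new one for a novel stimulus."""
        self.depths = [d * self.decay for d in self.depths]   # slow forgetting
        if self.centres:
            dists = [np.linalg.norm(x - c) for c in self.centres]
            k = int(np.argmin(dists))
            if dists[k] < 2 * self.width:
                self.depths[k] += self.rate
                self.centres[k] += 0.1 * (x - self.centres[k])  # well drifts
                return
        self.centres.append(np.asarray(x, float).copy())
        self.depths.append(self.rate)

    def recognise(self, x, steps=200, dt=0.05):
        """Follow the velocity field dx/dt = -grad U(x) into a well."""
        x = np.asarray(x, float).copy()
        for _ in range(steps):
            pulls = [d * (c - x) / self.width ** 2
                     * np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                     for c, d in zip(self.centres, self.depths)]
            x += dt * np.sum(pulls, axis=0)
        return int(np.argmin([np.linalg.norm(x - c) for c in self.centres]))

field = PlasticField()
red, blue = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
for _ in range(20):
    field.learn(red)
    field.learn(blue)
print(field.recognise(np.array([0.9, 0.1, 0.1])))  # -> index of the "red" well
```

    An unfamiliar input such as (0.9, 0.1, 0.1) is drawn to the deepest nearby well, which is the gradient description of recognition the abstract mentions; the limit-cycle (van der Pol) extension would replace these fixed-point attractors with oscillatory ones.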

    Automated Characterisation and Classification of Liver Lesions From CT Scans

    Cancer is a general term for a wide range of diseases that can affect any part of the body through the rapid creation of abnormal cells that grow beyond their normal boundaries. Liver cancer is one of the common diseases, causing the deaths of more than 600,000 people each year. Early detection is important for diagnosis and for reducing mortality. Liver lesions are examined with various medical imaging modalities such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI). Improvements in medical imaging and image-processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. Moreover, CAD systems can help physicians, as a second opinion, in characterising lesions and making the diagnostic decision. CAD systems have therefore become an important research area; in particular, they can provide diagnostic assistance to doctors to improve overall diagnostic accuracy.

    The traditional methods for characterising liver lesions and differentiating normal liver tissue from abnormal tissue depend largely on the radiologist's experience. Thus, CAD systems based on image processing and artificial intelligence techniques have gained a lot of attention, since they can provide constructive diagnostic suggestions to clinicians for decision making. Liver lesions are characterised in two ways: (1) using a content-based image retrieval (CBIR) approach to assist the radiologist in liver lesion characterisation; (2) calculating high-level features that describe/characterise the liver lesion in a way that can be interpreted by humans, particularly radiologists/clinicians, from the hand-crafted/engineered computational features (low-level features) and a learning process. The research gap lies in deriving a high-level understanding and interpretation of medical image content from low-level pixel analysis, based on mathematical processing and artificial intelligence methods. In our work, this gap is bridged by establishing a relation between image content and medical meaning, in analogy to the radiologist's understanding.

    This thesis explores an automated system for the classification and characterisation of liver lesions in CT scans. Firstly, the liver is segmented automatically using anatomical medical knowledge, a histogram-based adaptive threshold, and morphological operations. The lesions and vessels are then extracted from the segmented liver by applying AFCM and a Gaussian mixture model through a region-growing process, respectively. Secondly, the proposed framework categorises the high-level features into two groups: the first group comprises high-level features extracted from the image content (lesion location, lesion focality, calcification, scar, ...); the second group comprises high-level features inferred from the low-level features through a machine learning process to characterise the lesion (lesion density, lesion rim, lesion composition, lesion shape, ...). A novel multiple-ROI selection approach is proposed, in which regions are derived by generating an abnormality-level map based on the intensity difference and the proximity distance of each voxel with respect to the normal liver tissue. Then, the associations between the low-level features, the high-level features and the appropriate ROI are derived by assigning to each ROI its ability to represent a set of lesion characteristics. Finally, a novel feature vector is built from the high-level features and fed into an SVM for lesion classification. In contrast with most existing research, which uses low-level features only, the use of high-level features and characterisation helps in interpreting and explaining the diagnostic decision.

    The methods are evaluated on a dataset containing 174 CT scans. The experimental results demonstrate the efficacy of the proposed framework in the successful characterisation and classification of liver lesions in CT scans. The average accuracy achieved was 95.56% for liver lesion characterisation, while the lesion classification accuracy was 97.1% for the entire dataset. The proposed framework provides a more robust and efficient lesion characterisation framework through comprehension of the low-level features to generate semantic features. The use of high-level features (characterisation) helps in better interpretation of CT liver images. In addition, difference-of-features using multiple ROIs was developed to capture lesion characteristics reliably, in contrast to the current research trend of extracting features from the lesion only without paying much attention to the relation between the lesion and the surrounding area. The design of the liver lesion characterisation framework is based on prior knowledge of the medical background, to obtain a better and clearer understanding of liver lesion characteristics in medical CT images.
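
    To make the shape of such a pipeline concrete, here is a deliberately simplified sketch, not the thesis code: Otsu's method stands in for the histogram-based adaptive threshold, morphology cleans the mask, and an SVM consumes a hypothetical high-level feature vector. Real CT segmentation would involve HU windowing, anatomical constraints and 3-D processing, and the AFCM/GMM lesion extraction and multiple-ROI features are omitted; the training data below are placeholders only.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk
from skimage.measure import label
from sklearn.svm import SVC

def rough_liver_mask(ct_slice):
    """Histogram-based threshold plus morphological clean-up on a 2-D slice,
    keeping the largest connected component as the organ candidate."""
    mask = ct_slice > threshold_otsu(ct_slice)          # histogram-based cut
    mask = binary_closing(binary_opening(mask, disk(3)), disk(3))
    labels = label(mask)
    if labels.max() == 0:
        return mask
    largest = 1 + np.argmax(np.bincount(labels.ravel())[1:])
    return labels == largest

# hypothetical high-level feature vector per lesion
# (density, rim, composition, shape, ...) with placeholder values
rng = np.random.default_rng(0)
X_train = rng.random((40, 8))          # placeholder lesion feature vectors
y_train = rng.integers(0, 2, 40)       # placeholder benign/malignant labels
clf = SVC(kernel="rbf").fit(X_train, y_train)
```

    The point of the design, as the abstract argues, is that the SVM sees human-interpretable characteristics rather than raw pixel statistics, so a prediction can be explained in the same vocabulary a radiologist uses.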

    The art of clustering bandits.

    Multi-armed bandit problems are receiving a great deal of attention because they adequately formalize the exploration-exploitation trade-offs arising in several industrially relevant applications, such as online advertisement and, more generally, recommendation systems. In many cases, however, these applications have a strong social component, whose integration in the bandit algorithms could lead to a dramatic performance increase. For instance, we may want to serve content to a group of users by taking advantage of an underlying network of social relationships among them. The purpose of this thesis is to introduce novel and principled algorithmic approaches to the solution of such networked bandit problems. Starting from a global (Laplacian-based) strategy which allocates a bandit algorithm to each network node (user) and allows it to "share" signals (contexts and payoffs) with the neighboring nodes, our goal is to derive and experimentally test more scalable approaches based on different ways of clustering the graph nodes. More importantly, we investigate the case where the graph structure is not given ahead of time and has to be inferred from past user behavior. A general difficulty arising in such practical scenarios is that data sequences are typically nonstationary, implying that traditional statistical inference methods should be used cautiously, possibly replacing them with more robust nonstochastic (e.g., game-theoretic) inference methods. In this thesis, we first introduce centralized clustering bandits. We then propose the corresponding solution in the decentralized scenario. After that, we describe generic collaborative clustering bandits. Finally, we extend the state-of-the-art clustering bandits that we developed and showcase them in the quantification problem.
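
    A minimal sketch of the idea, in the spirit of cluster-based linear bandits (a CLUB-style scheme) rather than the thesis's actual algorithms: each user keeps ridge-regression statistics, users whose estimated preference vectors are close share those statistics, and arms are chosen by an upper confidence bound under the cluster's pooled estimate. The dimension, threshold and merging rule below are illustrative assumptions.

```python
import numpy as np

class ClusteredLinUCB:
    def __init__(self, n_users, dim, alpha=1.0, gap=0.5):
        self.A = np.stack([np.eye(dim)] * n_users)   # per-user Gram matrices
        self.b = np.zeros((n_users, dim))            # per-user reward sums
        self.alpha, self.gap = alpha, gap

    def _theta(self, u):
        return np.linalg.solve(self.A[u], self.b[u])  # ridge estimate

    def cluster_of(self, u):
        """Users whose estimates lie within `gap` of user u pool statistics."""
        t_u = self._theta(u)
        return [v for v in range(len(self.b))
                if np.linalg.norm(self._theta(v) - t_u) < self.gap]

    def choose(self, u, arms):
        """Pick the arm with the highest UCB under the cluster's estimate."""
        cluster = self.cluster_of(u)
        dim = self.b.shape[1]
        A = sum(self.A[v] for v in cluster) - (len(cluster) - 1) * np.eye(dim)
        b = sum(self.b[v] for v in cluster)
        theta, A_inv = np.linalg.solve(A, b), np.linalg.inv(A)
        ucb = [theta @ x + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(ucb))

    def update(self, u, x, reward):
        self.A[u] += np.outer(x, x)   # rank-one update for the played arm
        self.b[u] += reward * x
```

    Because the clusters are recomputed from the estimates themselves, the social structure is inferred from behaviour rather than given in advance, which is the regime the thesis targets; a decentralized variant would exchange these statistics between neighbouring nodes instead of holding them centrally.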

    COMPUTER-SUPPORTED COLLABORATIVE KNOWLEDGE BUILDING IN ENGINEERING DESIGN

    Engineering design is defined as the process of devising a technical system, component, or process to satisfy desired needs. Collaborative engineering design (CED) is a knowledge-intensive process that involves multidisciplinary people working jointly, sharing resources and outcomes, and building new knowledge while solving problems. People need to collaborate synchronously or asynchronously, either in the same place or distributed geographically. This thesis proposes that engineering design can be modeled not only as a process of knowledge transformation, but also as a process of collaborative knowledge building (CKB). CKB is a goal-driven collaborative process of generating and refining ideas and concepts of value to the community. Properly applied and supported, CKB has the potential to improve both the learning and the design outcomes resulting from collaborative design projects. Existing collaboration tools have evolved without a clear understanding of designers' needs, even though a portion of the required functionalities has been achieved separately. This thesis proposes an integrated CKB-orientated model for collaborative engineering design, incorporating the key elements of Stahl's CKB model, Lu's ECN-based collaborative engineering model, Nonaka's knowledge-creation theory, and Sim and Duffy's model of a design activity. Based on this model, a set of specific requirements for collaboration tools is presented, and functionalities that do not currently exist are identified.

    Teachers' learning styles: their effect on teaching styles

    No abstract available