
    Numerical Fitting-based Likelihood Calculation to Speed up the Particle Filter

    The likelihood calculation for a vast number of particles is the computational bottleneck of the particle filter in applications where the observation information is rich. To compute particle likelihoods quickly, a numerical fitting approach is proposed that constructs a Likelihood Probability Density Function (Li-PDF) from a comparatively small number of so-called fulcrums. The likelihood of each particle is then inferred analytically, explicitly or implicitly, from the Li-PDF rather than computed directly from the observation, which significantly reduces computation and enables real-time filtering. The proposed approach preserves estimation quality when an appropriate fitting function and properly distributed fulcrums are used; the construction of the fitting function and the placement of fulcrums are each addressed in detail. In particular, to handle multivariate fitting, a nonparametric kernel density estimator is presented that is flexible and convenient for implicit Li-PDF implementation. Simulation comparisons with a variety of existing approaches on a benchmark one-dimensional model, multi-dimensional robot localization, and visual tracking demonstrate the validity of the approach.
    Comment: 42 pages, 17 figures, 4 tables and 1 appendix. This paper is a draft/preprint of one paper submitted to the IEEE Transaction
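
    The fulcrum idea above can be illustrated with a toy one-dimensional sketch. The Gaussian observation likelihood and the Nadaraya-Watson kernel regression used as the fitting function here are illustrative assumptions, not the paper's actual choices:

```python
import math
import random

def expensive_likelihood(x, z, sigma=1.0):
    # Stand-in for a costly observation likelihood p(z | x);
    # in practice this is the expensive per-particle computation.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def fitted_likelihood(particles, z, n_fulcrums=10, bandwidth=0.5):
    # 1) Choose a small set of fulcrums spanning the particle cloud.
    lo, hi = min(particles), max(particles)
    fulcrums = [lo + (hi - lo) * i / (n_fulcrums - 1) for i in range(n_fulcrums)]
    # 2) Evaluate the expensive likelihood only at the fulcrums.
    lik = [expensive_likelihood(f, z) for f in fulcrums]
    # 3) Kernel (Nadaraya-Watson) regression plays the role of the
    #    fitting function: interpolate the likelihood everywhere else.
    def li_pdf(x):
        w = [math.exp(-0.5 * ((x - f) / bandwidth) ** 2) for f in fulcrums]
        return sum(wi * li for wi, li in zip(w, lik)) / sum(w)
    return [li_pdf(p) for p in particles]

random.seed(0)
particles = [random.gauss(0.0, 2.0) for _ in range(1000)]
weights = fitted_likelihood(particles, z=0.3)
```

    Only 10 expensive likelihood evaluations are performed instead of 1000; the remaining weights come from the cheap fitted function.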

    Evolving temporal fuzzy association rules from quantitative data with a multi-objective evolutionary algorithm

    A novel method is presented for mining association rules that are both quantitative and temporal, using a multi-objective evolutionary algorithm. The method successfully identifies numerous temporal association rules that occur more frequently in regions of a dataset with specific quantitative values, represented with fuzzy sets. The novelty of this research lies in exploring the composition of quantitative and temporal fuzzy association rules and in hybridising a multi-objective evolutionary algorithm with fuzzy sets. Results show the ability of a multi-objective evolutionary algorithm (NSGA-II) to evolve multiple target itemsets that had been augmented into synthetic datasets
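
    As a rough illustration of the ingredients involved, the sketch below scores a fuzzy itemset inside a temporal window using a triangular membership function. The min-combination rule, the data layout, and all names are illustrative assumptions, not the paper's exact formulation:

```python
def tri(x, a, b, c):
    # Triangular fuzzy membership over a quantitative attribute:
    # rises from a to a peak at b, falls back to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_support(transactions, window, memberships):
    # Support of a fuzzy itemset restricted to a temporal window [t0, t1):
    # per-transaction memberships are min-combined, then averaged.
    t0, t1 = window
    in_win = [tx for tx in transactions if t0 <= tx["t"] < t1]
    if not in_win:
        return 0.0
    total = sum(min(mu(tx[attr]) for attr, mu in memberships.items())
                for tx in in_win)
    return total / len(in_win)
```

    A multi-objective evolutionary algorithm would then evolve the window endpoints and the fuzzy-set parameters, trading off objectives such as support and rule confidence.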

    Analogue mouse pointer control via an online steady state visual evoked potential (SSVEP) brain-computer interface

    The steady state visual evoked potential has recently become a popular paradigm in brain–computer interface (BCI) applications. Typically (regardless of function) these applications offer the user a binary selection of targets that perform correspondingly discrete actions. Such discrete control systems are appropriate for applications that are inherently isolated in nature, such as selecting numbers from a keypad to be dialled or letters from an alphabet to be spelled. However, there is motivation for users to employ proportional control methods in intrinsically analogue tasks such as moving a mouse pointer. This paper introduces an online BCI in which control of a mouse pointer is directly proportional to the user's intent. Performance is measured over a series of pointer movement tasks and compared to the traditional discrete output approach. Analogue control allowed subjects to move the pointer to the cued target location faster than discrete output did, but suffered more undesired movements overall. Best performance was achieved by combining the movement threshold of traditional discrete techniques with the range of movement offered by proportional control
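
    The hybrid strategy described above, a discrete-style threshold combined with a proportional region, can be sketched as a simple transfer function. The parameter names and values are illustrative assumptions, not the paper's calibrated settings:

```python
def pointer_velocity(ssvep_power, noise_floor=1.0, gain=5.0, v_max=10.0):
    """Map an SSVEP feature amplitude to pointer speed.

    Below the noise floor the pointer does not move (the threshold
    behaviour of discrete techniques); above it, speed grows
    proportionally with the user's intent, up to a saturation cap.
    """
    if ssvep_power <= noise_floor:          # threshold: suppress noise-driven drift
        return 0.0
    v = gain * (ssvep_power - noise_floor)  # proportional region
    return min(v, v_max)                    # saturate at a maximum speed
```

    The threshold removes the spurious small movements that pure proportional control suffers from, while the proportional region preserves fast movement toward the target.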

    Special issue: Advances in learning schemes for function approximation

    The eleven papers included in this special issue represent a selection of extended contributions presented at the 11th International Conference on Intelligent Systems Design and Applications (ISDA), held in Córdoba, Spain, November 22–24, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is therefore aimed at practitioners, researchers and postgraduate students engaged in developing and applying advanced intelligent systems to solve real-world problems in the industrial and environmental fields. The papers are organized as follows.

    In the first contribution, Barros et al. propose a novel Bottom-Up Oblique Decision-Tree Induction Framework called BUTIF. BUTIF does not rely on an impurity measure for splitting nodes, since the data resulting from each split is known a priori. BUTIF allows the adoption of distinct clustering algorithms and binary classifiers, respectively, for generating the initial leaves of the tree and the splitting hyperplanes in its internal nodes. It is also capable of performing embedded feature selection, which may reduce the number of features in each hyperplane and thus improve model comprehensibility. Unlike virtually every top-down decision-tree induction algorithm, BUTIF does not require a subsequent pruning procedure to avoid overfitting, since its bottom-up nature does not overgrow the tree. Empirical results show the effectiveness of the proposed framework.

    In the second contribution, Bolón-Canedo et al. propose an ensemble of filters for classification, aimed at achieving good classification performance together with a reduction in input dimensionality. This approach sidesteps the problem of selecting an appropriate filter method for each problem at hand, a choice that is otherwise overly dependent on the characteristics of the dataset.
The adequacy of using an ensemble of filters rather than a single filter is demonstrated on synthetic and real data, paving the way for its application to a challenging scenario such as DNA microarray classification.

    In the third contribution, Cruz-Ramírez et al. present a study of a multi-objective optimization approach in the context of ordinal classification and propose a new performance metric, the Maximum Mean Absolute Error (MMAE). MMAE considers the per-class distribution of patterns and the magnitude of the errors, both issues being crucial in ordinal regression problems. In addition, the authors empirically show that some of the performance metrics are competing objectives, which justifies the use of multi-objective optimization strategies. In this study, a multi-objective evolutionary algorithm optimizes an artificial neural network ordinal model with different pairs of metrics, concluding that the pair of the Mean Absolute Error (MAE) and the proposed MMAE is the most favorable. The relationship between the metrics of this proposal is studied, and the graphical representation in the two-dimensional space in which the evolutionary search takes place is analyzed. The results show good classification performance, opening new lines of research in the evaluation and model selection of ordinal classifiers.

    In the fourth contribution, Cateni et al. present a novel resampling method for binary classification on imbalanced datasets that combines an oversampling and an undersampling technique. Several tests were developed to assess the efficiency of the proposed method. Four classifiers, based respectively on Support Vector Machines, Decision Trees, labeled Self-Organizing Maps and Bayesian classifiers, were developed and applied for binary classification on four datasets: a synthetic dataset, a widely used public dataset and two datasets from industrial applications.
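
    Reading MMAE as the mean absolute error of the worst-classified class, which is one plausible interpretation consistent with the description above, a minimal sketch for integer-encoded ordinal labels might look like:

```python
def mmae(y_true, y_pred):
    # Maximum Mean Absolute Error: compute the MAE separately for each
    # true class, then report the worst (largest) per-class MAE.
    per_class = []
    for c in set(y_true):
        errs = [abs(p - t) for t, p in zip(y_true, y_pred) if t == c]
        per_class.append(sum(errs) / len(errs))
    return max(per_class)
```

    Unlike plain MAE, which can hide large errors on a rare class behind good performance on frequent ones, MMAE penalizes the worst-served class, which is why the two metrics make a natural competing pair for multi-objective optimization.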
    In the fifth contribution, Ibañez et al. propose two greedy wrapper forward cost-sensitive selective naive Bayes approaches. Both approaches readjust the probability thresholds of each class so as to select the class with the minimum expected cost. The first algorithm (CS-SNB-Accuracy) considers adding each variable to the model and measures the performance of the resulting model on the training data. In contrast, the second algorithm (CS-SNB-Cost) considers adding variables that reduce the misclassification cost, that is, the distance between the readjusted class and the actual class. The authors tested the algorithms in the area of bibliometric index prediction. Given the popularity of the well-known h-index, they built several prediction models to forecast the annual increase of the h-index for Neurosciences journals over a four-year time horizon. Results show that the approaches, particularly CS-SNB-Accuracy, often achieved higher accuracy than other Bayesian classifiers, and that CS-SNB-Cost almost always achieved a lower average cost than the analyzed standard classifiers. These cost-sensitive selective naive Bayes approaches outperform the selective naive Bayes in terms of accuracy and average cost, so the cost-sensitive learning approach could also be applied in other probabilistic classification approaches.

    In the sixth paper, Sobrino et al. approach causal questions with the aims of: (1) answering what-questions by identifying the cause of an effect; (2) answering how-questions by selecting an appropriate part of a mechanism that relates cause–effect pairs; and (3) answering why-questions by identifying central causes in the mechanism that answers the how-questions. To automatically obtain answers to why-questions, the authors hypothesize that the deepest knowledge associated with them can be obtained from the central nodes of the graph that schematizes the mechanism.
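
    The cost-sensitive selection step described above, choosing the class with minimum expected cost given class posteriors, is the standard decision-theoretic rule and can be sketched as follows. The dictionary-based interface is illustrative, not the authors' API:

```python
def min_expected_cost_class(posterior, cost):
    """Pick the prediction minimizing expected misclassification cost.

    posterior: {class: P(class | x)} from, e.g., a naive Bayes model.
    cost[i][j]: cost of predicting class i when the true class is j.
    """
    def expected_cost(pred):
        # Expected cost of predicting `pred`, averaged over true classes.
        return sum(cost[pred][c] * p for c, p in posterior.items())
    return min(posterior, key=expected_cost)
```

    Note that the chosen class need not be the most probable one: a class with a lower posterior can win when mispredicting it is much more expensive.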
This contribution is concerned with medical question answering systems, although the approach does not address how to retrieve medical documents as a primary answer to a question, but rather how to extract relevant causal answers from a document previously retrieved by a search engine. This research thus deals with the automatic detection and extraction of causal relations from medical documents.

    In the seventh paper, Sleiman and Corchuelo propose a hybrid approach that explores the use of standard machine-learning techniques to extract web information. The results show that the proposal outperforms three state-of-the-art techniques from the literature, which opens up quite a new approach to information extraction.

    In the eighth paper, García-Hernández et al. present a hybrid system for incorporating human expert knowledge into the unequal area facility layout problem. A subset of facility designs is generated using a genetic algorithm and then evaluated by a human expert. The hybrid system assigns a mark to each design, with the principal aim of substituting for the human expert's knowledge so as to avoid fatiguing or burdening him or her. The proposed approach was tested on a real case study of 365 facility layout designs for an ovine slaughterhouse. The validation phase of the intelligent model was performed using a new subset of 181 facility layout designs evaluated by a different human expert. The results of the experiment, which validate the proposed approach, are presented and discussed.

    In the ninth paper, Kang et al. present an effective control method based on an adaptive PID neural network and the particle swarm optimization (PSO) algorithm. PSO is used to initialize the neural network, improving convergence speed and preventing the weights from becoming trapped in local optima.
To adapt the initially uncertain and varying parameters of the control system, the authors introduce an improved gradient descent method to adjust the network parameters, and the stability of the controller is analyzed using the Lyapunov method. A simulation of a complex, strongly coupled nonlinear multiple-input multiple-output (MIMO) system is presented. Empirical results show that the proposed controller obtains good precision in a shorter time than the other methods considered.

    In the tenth paper, Castellano et al. introduce a multi-agent system that exploits positioning information from mobile devices to detect the occurrence of user situations related to social events. In the functional view of the system, the first level of information processing is managed by marking agents, which leave marks in the environment corresponding to the users' positions. The accumulation of marks enables a stigmergic cooperation mechanism, generating short-term memory structures in the local environment. Information provided by these structures is granulated by event agents, which associate a certainty degree with each event. Finally, an inference level managed by situation agents deduces user situations from the underlying events by exploiting fuzzy rules whose parameters are generated automatically by a neuro-fuzzy approach. Fuzziness allows the system to cope with the uncertainty of the events. In the architectural view of the system, the authors adopt semantic web standards to guarantee structural interoperability in an open application environment. The system has been tested on different real-world scenarios to show the effectiveness of the proposed approach.

    In the final paper, Chira et al. model the real-world optimization problem of urban bicycle rental systems as a capacitated Vehicle Routing Problem (VRP) with multiple depots and a simultaneous need for pickup and delivery at each base station location.
Evolutionary algorithms and ant colony systems are proposed, and real data from the cities of Barcelona and Valencia are used for experimental simulations.

    We would like to thank our peer reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support of the ISDA conference and for the opportunity to organize this special issue

    New applications of Ambient Intelligence

    Ambient Intelligence emerged more than two decades ago, with the exciting promise of technologically empowered environments that would be everywhere, cater to all our needs, be constantly available, know who we are and what we like, and allow us to make explicit requests using natural means instead of the traditional mouse and keyboard. At a time by which this technological unravelling was expected to have already happened, we still use the mouse and the keyboard. In this paper we briefly analyse why this evolution is taking longer than initially expected. We then analyse several different projects that are innovative, in the sense that they encompass fields of application beyond those initially envisioned, and show the diverse areas that AmI systems may potentially come to change.
    This work is part-funded by ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012). This work is also part-funded by National Funds through the FCT within project PEst-OE/EEI/UI0752/2011

    CHARMM: The biomolecular simulation program

    CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecular simulation program. It has been developed over the last three decades with a primary focus on molecules of biological interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estimators, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numerous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/63074/1/21287_ftp.pd

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but they can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment, allowing analysis of the differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment), who were identified retrospectively (baseline and partial follow-up data were collected from medical records) and then followed prospectively for 0–18 months, such that the total follow-up was 24 months (data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). 
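
    For readers unfamiliar with the unit, an event rate per 100 person-years, as reported above, is simply 100 × events / person-years. The sketch below adds a common log-scale normal-approximation confidence interval for a Poisson count; this is illustrative only, not necessarily the registry's actual statistical method:

```python
import math

def rate_per_100py(events, person_years, z=1.96):
    """Event rate per 100 person-years with an approximate 95% CI.

    Uses the normal approximation on the log scale, where the standard
    error of the log rate for a Poisson count is 1/sqrt(events).
    """
    rate = 100.0 * events / person_years
    half = z / math.sqrt(events)  # half-width on the log scale
    return rate, rate * math.exp(-half), rate * math.exp(half)
```

    For example, 30 deaths over 1000 person-years gives a rate of 3.0 per 100 person-years, with a CI that is asymmetric around the point estimate, as in the intervals quoted above.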
Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and of the recall and survivorship biases incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF: NCT01090362