4,969 research outputs found

    Retrieval, reuse, revision and retention in case-based reasoning

    The original is available at www.journals.cambridge.org. Case-based reasoning (CBR) is an approach to problem solving that emphasizes the role of prior experience during future problem solving (i.e., new problems are solved by reusing and, if necessary, adapting the solutions to similar problems that were solved in the past). It has enjoyed considerable success in a wide variety of problem-solving tasks and domains. Following a brief overview of the traditional problem-solving cycle in CBR, we examine the cognitive science foundations of CBR and its relationship to analogical reasoning. We then review a representative selection of CBR research in the past few decades on aspects of retrieval, reuse, revision, and retention. Peer reviewed.
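    The retrieve-reuse-revise-retain cycle described above maps naturally onto code. Below is a minimal sketch in Python, assuming a flat case base, a simple feature-overlap similarity, and caller-supplied adapt and evaluate functions (both hypothetical stand-ins for domain-specific adaptation and repair logic):

    from dataclasses import dataclass

    @dataclass
    class Case:
        problem: dict      # feature -> value description of the problem
        solution: object   # the solution that worked for this problem

    def similarity(p1, p2):
        """Fraction of shared feature values (a simple overlap measure)."""
        keys = set(p1) | set(p2)
        return sum(p1.get(k) == p2.get(k) for k in keys) / len(keys)

    def cbr_cycle(new_problem, case_base, adapt, evaluate):
        # Retrieve: find the most similar prior case.
        best = max(case_base, key=lambda c: similarity(c.problem, new_problem))
        # Reuse: adapt its solution to the new problem.
        candidate = adapt(best.solution, best.problem, new_problem)
        # Revise: evaluate the proposed solution and repair it if needed.
        revised = evaluate(candidate, new_problem)
        # Retain: store the confirmed experience for future reuse.
        case_base.append(Case(new_problem, revised))
        return revised

    In practice each of the four steps is an active research area in its own right, which is what the survey's per-step review reflects.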

    LUNAR: Cellular automata for drifting data streams

    With the advent of fast data streams, real-time machine learning has become a challenging task demanding significant processing resources. In addition, these streams can be affected by concept drift, whereby learning methods must detect changes in the data distribution and adapt to the evolving conditions. Several emerging paradigms such as the so-called Smart Dust, Utility Fog, or Swarm Robotics are in need of efficient and scalable solutions for real-time scenarios in which computing resources are usually constrained. Cellular automata, as low-bias and noise-robust pattern recognition methods with competitive classification performance, meet the requirements imposed by these paradigms, mainly due to their simplicity and parallel nature. In this work we propose LUNAR, a streamified version of cellular automata devised to meet these requirements. LUNAR is able to act as a true incremental learner while adapting to drifting conditions. Furthermore, LUNAR is highly interpretable, as its cellular structure directly represents the mapping between the feature space and the labels to be predicted. Extensive simulations with synthetic and real data provide evidence of its competitive classification performance compared to long-established and successful online learning methods.
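    As a rough illustration of the idea (not the authors' exact LUNAR update rule), a grid of cells over a normalized feature space can act as an incremental, interpretable classifier, with exponential fading of per-cell statistics standing in for drift adaptation:

    import numpy as np

    class CellularStreamClassifier:
        """Sketch of a cellular-automaton-style incremental classifier:
        the feature space (assumed scaled to [0, 1]^d) is discretized
        into a grid of cells, each cell keeps exponentially faded
        per-class counts, and prediction returns the majority class of
        the cell a sample falls in."""

        def __init__(self, n_bins, n_features, n_classes, fade=0.99):
            self.n_bins = n_bins
            self.fade = fade  # forgetting factor: old evidence decays, tracking drift
            self.counts = np.zeros((n_bins,) * n_features + (n_classes,))

        def _cell(self, x):
            # Map a feature vector in [0, 1]^d to integer grid coordinates.
            x = np.asarray(x, dtype=float)
            return tuple(np.clip((x * self.n_bins).astype(int), 0, self.n_bins - 1))

        def learn_one(self, x, y):
            self.counts *= self.fade                 # decay old evidence
            self.counts[self._cell(x)][y] += 1.0     # reinforce the observed label

        def predict_one(self, x):
            cell = self.counts[self._cell(x)]
            return int(np.argmax(cell)) if cell.sum() > 0 else 0

    A dense grid like this is only practical for low-dimensional or per-feature-binned settings; the interpretability claim, though, is visible directly: each cell is a readable region-to-label rule.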

    Validating a neural network-based online adaptive system

    Neural networks are popular models used for online adaptation to accommodate system faults and recover from environmental changes in real-time automation and control applications. However, this adaptivity limits the applicability of conventional verification and validation (V&V) techniques to such systems. We investigated the V&V of neural network-based online adaptive systems and developed a novel validation approach consisting of two methods: (1) an independent novelty detector at the system input layer detects failure conditions and tracks abnormal events/data that may cause unstable learning behavior; (2) at the system output layer, we perform a validity check on the network predictions to validate its accommodation performance. Our research focuses on the Intelligent Flight Control System (IFCS) for the NASA F-15 aircraft as an example of an online adaptive control application. We utilized Support Vector Data Description (SVDD), a one-class classifier, to examine the data entering the adaptive component and detect potential failures. We developed a decompose-and-combine strategy that drastically reduces its computational cost, from O(n³) down to O(n^(3/2) log n), so that the novelty detector becomes feasible in real time. We define a confidence measure, the validity index, to validate the predictions of the Dynamic Cell Structure (DCS) network in IFCS. Statistical information is collected during adaptation, and the validity index is computed to reflect the trustworthiness associated with each neural network output. The computation of the validity index in DCS is straightforward and efficient. Through experimentation with IFCS, we demonstrate that: (1) the SVDD tool detects system failures accurately and provides validation inferences in real time; (2) the validity index effectively indicates poor fitting within regions characterized by sparse data and/or inadequate learning. The developed methods can be integrated with available online monitoring tools and further generalized into a promising validation framework for neural network-based online adaptive systems.
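    A hedged sketch of the input-layer novelty detector, using scikit-learn's OneClassSVM (closely related to SVDD when an RBF kernel is used) and a partition-then-refit scheme that follows the decompose-and-combine idea only in spirit; the parameter values and the function name are illustrative, not the thesis's exact procedure:

    import numpy as np
    from sklearn.svm import OneClassSVM

    def decompose_and_combine(X, n_parts=8, nu=0.05, gamma=0.5):
        """Fit a one-class detector on each partition of the training
        data, keep only the support vectors, and refit a final model
        on their union. Training several small models plus one model
        on the (much smaller) union is far cheaper than one cubic-cost
        fit on all of X."""
        rng = np.random.default_rng(0)
        idx = rng.permutation(len(X))
        keep = []
        for part in np.array_split(idx, n_parts):
            m = OneClassSVM(nu=nu, gamma=gamma).fit(X[part])
            keep.append(m.support_vectors_)   # retain only the boundary points
        return OneClassSVM(nu=nu, gamma=gamma).fit(np.vstack(keep))

    # Usage: detector = decompose_and_combine(nominal_flight_data)
    #        detector.predict(x) == -1 flags a potential failure condition.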

    Amputation effects on the underlying complexity within transtibial amputee ankle motion

    The presence of chaos in walking is considered to provide a stable, yet adaptable means of locomotion. This study examined whether lower limb amputation and subsequent prosthetic rehabilitation result in a loss of complexity in amputee gait. Twenty-eight individuals with transtibial amputation participated in a 6-week, randomized cross-over design study in which they underwent a 3-week adaptation period to each of two separate prostheses. One prosthesis was deemed “more appropriate” and the other “less appropriate” based on matching/mismatching the activity levels of the person and the prosthesis. Subjects performed a treadmill walking trial at self-selected walking speed at multiple points of the adaptation period, while kinematics of the ankle were recorded. Bilateral sagittal plane ankle motion was analyzed for underlying complexity using the pseudoperiodic surrogation analysis technique. Results revealed the presence of underlying deterministic structure in both prostheses and in both the prosthetic and sound leg ankle (discriminant measure: largest Lyapunov exponent). Results also revealed that the prosthetic ankle may be more likely to suffer a loss of complexity than the sound ankle, and that a “more appropriate” prosthesis may be better suited to help restore a healthy complexity of movement within the prosthetic ankle motion compared to a “less appropriate” prosthesis (discriminant measure: sample entropy). Sample entropy results are less likely to be affected by intracycle periodic dynamics than the largest Lyapunov exponent. Adaptation does not seem to influence complexity in the system for experienced prosthesis users.
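    Sample entropy, one of the discriminant measures above, is straightforward to compute. A minimal sketch using a common simplified formulation (Chebyshev distance, self-matches excluded, tolerance r defaulting to 0.2 times the series standard deviation; the study's exact parameter choices are not stated in the abstract):

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn(m, r): negative log of the conditional probability
        that subsequences matching for m points (within tolerance r)
        also match for m + 1 points. Lower values indicate more
        regular, less complex motion; returns inf if no (m+1)-matches."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def match_pairs(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(templ)) / 2  # exclude self-matches
        return -np.log(match_pairs(m + 1) / match_pairs(m))

    Applied to a sagittal-plane ankle angle series, e.g. sample_entropy(ankle_angle), a drop in SampEn on the prosthetic side relative to the sound side would indicate the loss of complexity the study probes.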

    Chance and Necessity in Evolution: Lessons from RNA

    The relationship between sequences and secondary structures or shapes in RNA exhibits robust statistical properties summarized by three notions: (1) the notion of a typical shape (among all sequences of fixed length, certain shapes are realized much more frequently than others), (2) the notion of shape space covering (all typical shapes are realized in a small neighborhood of any random sequence), and (3) the notion of a neutral network (sequences folding into the same typical shape form networks that percolate through sequence space). Neutral networks loosen the requirements on the mutation rate for selection to remain effective. The original (genotypic) error threshold has to be reformulated in terms of a phenotypic error threshold. With regard to adaptation, neutrality has two seemingly contradictory effects: it acts as a buffer against mutations, ensuring that a phenotype is preserved, yet it is deeply enabling, because it permits evolutionary change to occur by allowing the sequence context to vary silently until a single point mutation can become phenotypically consequential. Neutrality also influences the predictability of adaptive trajectories in seemingly contradictory ways. On the one hand, it increases the uncertainty of their genotypic trace. At the same time, neutrality structures the access from one shape to another, thereby inducing a topology among RNA shapes that permits a distinction between continuous and discontinuous shape transformations. To the extent that adaptive trajectories must undergo such transformations, their phenotypic trace becomes more predictable.

    Comment: 37 pages, 14 figures; 1998 CNLS conference; high-quality figures at http://www.santafe.edu/~walte
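    The degree of neutrality around a given sequence can be estimated directly by folding all one-point mutants and counting those that preserve the minimum-free-energy shape. An illustrative sketch, assuming the ViennaRNA Python bindings (RNA.fold); this is a toy measure in the spirit of the paper, not its exact protocol:

    import RNA  # ViennaRNA Python bindings (assumed installed)

    BASES = "ACGU"

    def neutral_fraction(seq):
        """Fraction of one-point mutants of seq that fold into the same
        minimum-free-energy secondary structure, i.e. the local degree
        of neutrality of the sequence."""
        shape, _ = RNA.fold(seq)          # reference shape of the wild type
        neutral = total = 0
        for i, orig in enumerate(seq):
            for b in BASES:
                if b == orig:
                    continue
                mutant = seq[:i] + b + seq[i + 1:]
                total += 1
                if RNA.fold(mutant)[0] == shape:
                    neutral += 1          # mutation is phenotypically silent
        return neutral / total

    A high neutral fraction is exactly the buffering effect described above; following chains of such neutral one-point mutants traces out the percolating neutral network.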