Constructing Complexity-efficient Features in XCS with Tree-based Rule Conditions
A major goal of machine learning is to create techniques that abstract away
irrelevant information. The generalisation property of a standard Learning Classifier System (LCS) removes such information at the feature level but not
at the feature interaction level. Code Fragments (CFs), a form of tree-based
programs, introduced feature manipulation to discover important interactions,
but they often contain irrelevant information, which causes structural
inefficiency. XOF is a recently introduced LCS that uses CFs to encode building
blocks of knowledge about feature interaction. This paper aims to optimise the
structural efficiency of CFs in XOF. We propose two measures to improve
the construction of CFs to achieve this goal. Firstly, a new CF-fitness update estimates the applicability of CFs while also accounting for their structural complexity. Secondly, a niche-based method generates CFs. These approaches were tested on Even-parity and Hierarchical problems,
which require highly complex combinations of input features to capture the data
patterns. The results show that the proposed methods significantly increase the
structural efficiency of CFs, as measured by the rule "generality rate".
This results in faster learning performance in the Hierarchical Majority-on
problem. Furthermore, a user-set depth limit for CF generation is not needed as
the learning agent will not adopt higher-level CFs once optimal CFs are
constructed.
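The complexity-aware CF-fitness idea above can be sketched as follows. This is a minimal illustration, not the paper's actual update rule: the Widrow-Hoff-style learning rate, the penalty constant, and the `CodeFragment` fields are all assumptions introduced for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: a CF's fitness is its running applicability estimate
# discounted by structural complexity, so that of two equally useful
# fragments the structurally simpler one is ranked higher.

@dataclass
class CodeFragment:
    tree_size: int              # nodes in the tree-based rule condition
    applicability: float = 0.5  # running estimate of usefulness

def cf_fitness(cf, was_useful, beta=0.2, penalty=0.05):
    """Widrow-Hoff-style applicability update, then a complexity discount."""
    target = 1.0 if was_useful else 0.0
    cf.applicability += beta * (target - cf.applicability)
    return cf.applicability / (1.0 + penalty * cf.tree_size)

small, large = CodeFragment(tree_size=3), CodeFragment(tree_size=15)
for _ in range(10):
    f_small = cf_fitness(small, True)
    f_large = cf_fitness(large, True)
assert f_small > f_large  # equal usefulness, simpler CF wins
```

Because the discount grows with tree size, a depth limit becomes unnecessary in this sketch for the same reason the abstract gives: once a compact CF captures the interaction, deeper CFs cannot outscore it.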
Trajectory Tracking Control of Skid-Steering Mobile Robots with Slip and Skid Compensation using Sliding-Mode Control and Deep Learning
Slip and skid compensation is crucial for mobile robots' navigation in
outdoor environments and uneven terrains. In addition to the general slipping
and skidding hazards for mobile robots in outdoor environments, slip and skid
cause uncertainty for the trajectory tracking system and put the validity of
stability analysis at risk. Despite research in this field, achieving real-world feasible online slip and skid compensation remains challenging due to the
complexity of wheel-terrain interaction in outdoor environments. This paper
presents a novel trajectory tracking technique with real-world feasible online
slip and skid compensation at the vehicle-level for skid-steering mobile robots
in outdoor environments. The sliding mode control technique is utilized to design a robust trajectory tracking system that accounts for the parameter
uncertainty of this type of robot. Two previously developed deep learning
models [1], [2] are integrated into the control feedback loop to estimate the
robot's slipping and undesired skidding and feed the compensator in a real-time
manner. The main advantages of the proposed technique are (1) considering two
slip-related parameters rather than the conventional three slip parameters at
the wheel-level, and (2) having an online, real-world feasible slip and skid compensator that reduces the tracking errors in unforeseen
environments. The experimental results show that the proposed controller with
the slip and skid compensator improves the performance of the trajectory
tracking system by more than 27%.
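As a rough illustration of the control structure described above, here is a one-dimensional sliding-mode sketch with an additive slip-compensation term. Everything here is a toy assumption: the first-order plant, the gains, the boundary-layer width, and the `slip_estimate` stub standing in for the deep-learning models [1], [2].

```python
import numpy as np

def slip_estimate(t):
    """Stand-in for a learned slip model: estimated slip velocity at time t."""
    return 0.3 * np.sin(0.5 * t)

def smc_control(x, x_ref, x_ref_dot, t, k=1.5, phi=0.05):
    s = x - x_ref                        # sliding surface for a 1st-order plant
    sat = np.clip(s / phi, -1.0, 1.0)    # boundary layer to limit chattering
    u_nominal = x_ref_dot - k * sat      # robust reaching law
    return u_nominal - slip_estimate(t)  # feed-forward slip compensation

# Simulate x_dot = u + slip: the true slip disturbs the plant, while the
# compensator cancels its estimate inside the control loop.
dt, x, t = 0.01, 0.5, 0.0
for _ in range(2000):
    u = smc_control(x, x_ref=0.0, x_ref_dot=0.0, t=t)
    x += dt * (u + slip_estimate(t))
    t += dt
assert abs(x) < 1e-3  # tracking error driven into the boundary layer
```

The saturated reaching law is the standard chattering remedy for sliding-mode controllers; the real vehicle-level controller replaces the stub with online network inference in the feedback loop.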
New Fitness Functions in Binary Particle Swarm Optimisation for Feature Selection
Feature selection is an important data preprocessing technique in classification problems. This paper proposes two new fitness functions in binary particle swarm optimisation (BPSO) for feature selection to choose a small number of features and achieve high classification accuracy. In the first fitness function, the relative importance of classification performance and the number of features is balanced by using a linearly increasing weight in the evolutionary process. The second is a two-stage fitness function, where classification performance is optimised in the first stage and the number of features is taken into account in the second stage. K-nearest neighbour (KNN) is employed to evaluate the classification performance in the experiments on ten datasets. Experimental results show that by using either of the two proposed fitness functions in the training process, in almost all cases, BPSO can select a smaller number of features and achieve higher classification accuracy on the test sets than using overall classification performance as the fitness function. They outperform two conventional feature selection methods in almost all cases. In most cases, BPSO with the second fitness function can achieve better performance than with the first fitness function in terms of classification accuracy and the number of features.
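The two fitness functions can be sketched roughly as below. The weight schedule, the stage split, and all constants are illustrative assumptions; the abstract specifies only that the first function uses a linearly increasing weight over the run and that the second is two-stage.

```python
# Hedged sketch (lower fitness is better) of the two proposed BPSO fitness
# functions. Constants and the direction of the weight schedule are guesses
# introduced for illustration only.

def fitness_linear_weight(error_rate, n_selected, n_total, t, t_max,
                          w_start=0.8, w_end=0.99):
    """Weighted sum of error rate and feature ratio; weight shifts linearly."""
    w = w_start + (w_end - w_start) * t / t_max
    return w * error_rate + (1.0 - w) * n_selected / n_total

def fitness_two_stage(error_rate, n_selected, n_total, t, t_max,
                      stage_split=0.5):
    """Stage 1 optimises accuracy only; stage 2 adds the subset-size term."""
    if t < stage_split * t_max:
        return error_rate
    return 0.9 * error_rate + 0.1 * n_selected / n_total

# The same candidate (10% error, 20 of 100 features) is scored differently
# as the linearly increasing weight shifts emphasis across the run:
early = fitness_linear_weight(0.10, 20, 100, t=0, t_max=100)
late = fitness_linear_weight(0.10, 20, 100, t=100, t_max=100)
assert late < early

stage1 = fitness_two_stage(0.10, 20, 100, t=10, t_max=100)
stage2 = fitness_two_stage(0.10, 20, 100, t=90, t_max=100)
assert stage1 == 0.10 and stage2 > stage1  # size term appears only in stage 2
```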
Searching for VHE gamma-ray emission associated with IceCube neutrino alerts using FACT, H.E.S.S., MAGIC, and VERITAS
The realtime follow-up of neutrino events is a promising approach to search for astrophysical neutrino sources. It has so far provided compelling evidence for a neutrino point source: the flaring gamma-ray blazar TXS 0506+056, observed in coincidence with the high-energy neutrino IceCube-170922A detected by IceCube. The detection of very-high-energy (VHE) gamma rays from this source helped establish the coincidence and constrained the modeling of the blazar emission at the time of the IceCube event. The four major imaging atmospheric Cherenkov telescope arrays (IACTs), FACT, H.E.S.S., MAGIC, and VERITAS, operate an active follow-up program of target-of-opportunity observations of neutrino alerts sent by IceCube. This program has two main components. The first is the observation of known gamma-ray sources around which a cluster of candidate neutrino events has been identified by IceCube (Gamma-ray Follow-Up, GFU). The second is the follow-up of single high-energy neutrino candidate events of potential astrophysical origin, such as IceCube-170922A. GFU has recently been upgraded by IceCube in collaboration with the IACT groups. We present here recent results from the IACT follow-up programs of IceCube neutrino alerts and a description of the upgraded IceCube GFU system.
The insect, Galleria mellonella, is a compatible model for evaluating the toxicology of okadaic acid
The polyether toxin, okadaic acid, causes diarrhetic shellfish poisoning in humans. Despite extensive research into its cellular targets using rodent models, we know little about its putative effect(s) on innate immunity. We inoculated larvae of the greater waxmoth, Galleria mellonella, with physiologically relevant doses of okadaic acid by direct injection into the haemocoel (body cavity) and/or gavage (force-feeding). We monitored larval survival and employed a range of cellular and biochemical assays to assess the potential harmful effects of okadaic acid. Okadaic acid at concentrations >75 ng/larva (>242 µg/kg) led to significant reductions in larval survival (>65%) and circulating haemocyte (blood cell) numbers (>50%) within 24 h post-inoculation. In the haemolymph, okadaic acid reduced haemocyte viability and increased phenoloxidase activities. In the midgut, okadaic acid induced oxidative damage, as determined by increases in superoxide dismutase activity and levels of malondialdehyde (i.e., lipid peroxidation). Our observations of insect larvae correspond broadly to data published using rodent models of the shellfish poisoning toxidrome, including complementary LD50 values: 206–242 μg/kg in mice, ~239 μg/kg in G. mellonella. These data support the use of this insect as a surrogate model for the investigation of marine toxins, which offers distinct ethical and financial incentives.
Code fragments: Past and future use in transfer learning
Code Fragments (CFs) have existed as an extension to Evolutionary Computation, specifically Learning Classifier Systems (LCSs), for half a decade. Through the scaling, abstraction and reuse of both knowledge and functionality that CFs enable, interesting problems have been solved beyond the capability of any other technique. This paper traces the development of the different CF-based systems and outlines future research directions that will form the basis for advanced Transfer Learning in LCSs.
Learning Optimality Theory for Accuracy-Based Learning Classifier Systems
Evolutionary computation has brought great progress to rule-based learning, but this progress is often blind to the optimality of the system design. This article theoretically reveals an optimal learning scheme for the most popular evolutionary rule-based learning approach: the accuracy-based classifier system (XCS). XCS seeks to form accurate, maximally general rules that together classify the state space of a given domain. Previously, setting up the system to perform well has been a 'black art', as no systematic approach to XCS parameter tuning existed. We derive a theoretical approach that mathematically guarantees that XCS identifies the accurate rules, and which also returns a theoretically valid XCS parameter setting. We then demonstrate that our theoretical setting achieves the maximum correctness of rule identification in the fewest iterations possible. We also show experimentally that our theoretical setting enables XCS to easily solve several challenging problems where it had previously struggled.
Integration of Learning Classifier Systems with simultaneous localisation and mapping for autonomous robotics
A cognitive mobile robot must be able to autonomously solve the three complex problems of navigating: where it is, where it is going, and how it is going to get there. The first is addressed by techniques for simultaneous localisation and mapping (SLAM). The next stage of navigating is to plan a path to a goal, which is often achieved by learning techniques due to the scale of search required. Commonly, the localisation and mapping stage is separated from the path-planning stage, with the function not under study assumed to be ideal in order to simplify the problem (similarly, the goal is often predetermined by an external agent, such as a human operator specifying a location to reach). This work integrates the planning with the localisation and mapping in order to investigate the benefits of considering these aspects together, rather than as separate functions as is often assumed. Firstly, experiments on real robots show decreased localisation error in this approach (1.8 mm ±0.41 mm to 1.2 mm ±0.26 mm). Secondly, the number of steps to goal has concurrently been reduced (13.4 steps to 11.8 steps). This work is novel in its integration of evolutionary computation planning techniques with SLAM. It has also enabled the opportunity for rule-sharing between heterogeneous robots and the inclusion of action policies in SLAM filter updates.