Incremental construction of LSTM recurrent neural network
Long Short-Term Memory (LSTM) is a recurrent neural network that
uses structures called memory blocks to allow the network to
remember significant events distant in the past input sequence, in
order to solve long-time-lag tasks where other RNN approaches fail.
In this work we have performed experiments using LSTM networks
extended with growing abilities, which we call GLSTM. Four methods
of training growing LSTM networks have been compared. These methods
include cascade and fully connected hidden layers, as well as two
different levels of freezing previous weights in the cascade case.
GLSTM has been applied to a forecasting problem in a biomedical
domain, where the input/output behavior of five controllers of the
Central Nervous System has to be modelled. We have compared growing
LSTM results against other neural network approaches, and against
our previous work applying conventional LSTM to the task at hand.
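The memory-block mechanism the abstract refers to can be sketched as a single gated cell: a forget gate controls how much of the stored cell state survives each step, which is what lets an LSTM bridge long time lags. The following toy sketch uses scalar, hand-picked weights purely for illustration; it is not the paper's GLSTM, only the standard LSTM gating equations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # w maps gate name -> (w_x, w_h, b); scalar weights for clarity.
    gates = {}
    for name in ("i", "f", "o", "g"):
        wx, wh, b = w[name]
        pre = wx * x + wh * h_prev + b
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    # The forget gate f decides how much of the old cell state survives;
    # the input gate i decides how much of the candidate g is written.
    c = gates["f"] * c_prev + gates["i"] * gates["g"]
    h = gates["o"] * math.tanh(c)
    return h, c

# Toy weights (assumed, not trained): a strong forget-gate bias keeps
# the stored event alive across many zero-input steps.
w = {"i": (1.0, 0.0, 0.0), "f": (0.0, 0.0, 4.0),
     "o": (1.0, 0.0, 0.0), "g": (1.0, 0.0, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0, 0.0, 0.0]:  # one event, then a long silent lag
    h, c = lstm_step(x, h, c, w)
```

After the input spike at the first step, the cell state `c` decays only slowly (by the factor `sigmoid(4) ≈ 0.98` per step), illustrating how the memory block retains a distant event that a plain RNN would wash out.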
Evolution of the layers in a subsumption architecture robot controller
An approach to robotics called layered evolution, which merges features from the subsumption architecture into evolutionary robotics, is presented, and its advantages and relevance for science and engineering are discussed. This approach is used to construct a layered controller for a simulated robot that learns which light source to approach in an environment with obstacles. The evolvability and performance of layered evolution on this task are compared to (standard) monolithic evolution, and to incremental and modularised evolution. To test the optimality of the evolved solutions, the evolved controller is merged back into a single network. On the grounds of the test results, it is argued that layered evolution provides a superior approach for many tasks, and future research projects involving this approach are suggested.
Optimization of Interplanetary Rendezvous Trajectories for Solar Sailcraft Using a Neurocontroller
As for all low-thrust spacecraft, finding optimal solar sailcraft trajectories is a difficult and time-consuming task that involves a lot of experience and expert knowledge, since the convergence behavior of optimizers based on numerical optimal control methods depends strongly on an adequate initial guess, which is often hard to find. Even if the optimizer converges to an "optimal" trajectory, this trajectory is typically close to the initial guess, which is rarely close to the global optimum. This paper demonstrates that artificial neural networks in combination with evolutionary algorithms can be applied successfully to optimal solar sailcraft steering. Since these evolutionary neurocontrollers explore the trajectory search space more exhaustively than a human expert can using traditional optimal control methods, they are able to find steering strategies that generate better trajectories, closer to the global optimum. Results are presented for a Near Earth Asteroid rendezvous mission and for a Mercury rendezvous mission.
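The core idea of an evolutionary neurocontroller — optimizing controller parameters by mutation and selection instead of hand-tuning an initial guess — can be illustrated with a minimal (1+1) evolution strategy on a toy one-dimensional rendezvous problem. The dynamics, controller form, and parameters below are illustrative assumptions, not the paper's solar-sail model:

```python
import random

def rollout(weights, steps=30):
    # Toy 1-D "rendezvous": the controller maps (position error, velocity)
    # to a thrust command, clipped to a small maximum like a low-thrust sail.
    pos, vel, target = 0.0, 0.0, 1.0
    w_err, w_vel = weights
    for _ in range(steps):
        thrust = max(-0.1, min(0.1, w_err * (target - pos) + w_vel * vel))
        vel += thrust
        pos += vel
    return -abs(target - pos)  # fitness: negative final miss distance

def evolve(generations=60, sigma=0.2, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0]          # inert zero controller: fitness exactly -1.0
    best_fit = rollout(best)
    for _ in range(generations):
        child = [g + rng.gauss(0, sigma) for g in best]
        fit = rollout(child)
        if fit > best_fit:     # (1+1)-ES elitism: keep a child only if better
            best, best_fit = child, fit
    return best, best_fit
```

Because fitness is evaluated only by simulating the whole trajectory, the search needs no initial guess or gradient, which is the property the abstract credits for escaping local optima near a human-supplied starting trajectory.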
Hysteresis Modeling in Iron-Dominated Magnets Based on a Multi-Layered NARX Neural Network Approach
A full-fledged neural network model, based on a multi-layered Nonlinear AutoRegressive eXogenous (NARX) architecture, is proposed for quasi-static and dynamic hysteresis loops, one of the most challenging topics in computational magnetism. This modeling approach overcomes the drawbacks of classical and recent approaches for accelerator magnets, which combine hybridizations of standard hysteretic models and neural network architectures yet struggle to attain better than percent-level accuracy. By means of an incremental procedure, different deep neural network architectures are selected, fine-tuned and tested in order to predict magnetic hysteresis in the context of electromagnets. Tests and results show that the proposed NARX architecture best fits the measured magnetic field behavior of a reference quadrupole at CERN. In particular, the proposed modeling framework leads to a percent error below 0.02% for the magnetic field prediction, thus outperforming state-of-the-art approaches and paving a very promising way for future real-time applications.
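What makes a NARX model "autoregressive with exogenous input" is the shape of its regressor: each prediction of the output y(t) is fed the lagged outputs y(t-1..t-ny) together with lagged exogenous inputs u(t-1..t-nu) — exactly the history dependence that hysteresis requires. The helper below (an illustrative sketch, not the paper's CERN model) shows that regressor construction; the rows it produces would be the inputs to the neural network:

```python
def narx_regressors(u, y, nu=2, ny=2):
    # Build one-step-ahead NARX training pairs: predict y[t] from the
    # ny most recent outputs and the nu most recent exogenous inputs,
    # ordered most-recent-first.
    rows, targets = [], []
    start = max(nu, ny)
    for t in range(start, len(y)):
        rows.append(y[t - ny:t][::-1] + u[t - nu:t][::-1])
        targets.append(y[t])
    return rows, targets
```

For hysteresis, u would be the excitation current and y the measured field; feeding past field values back in is what lets the model distinguish the ascending and descending branches of a loop that a memoryless map cannot separate.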
Energy management system for biological 3D printing by the refinement of manifold model morphing in flexible grasping space
The use of 3D printing, or additive manufacturing, has gained significant
attention in recent years due to its potential for revolutionizing traditional
manufacturing processes. One key challenge in 3D printing is managing energy
consumption, as it directly impacts the cost, efficiency, and sustainability of
the process. In this paper, we propose an energy management system that
leverages the refinement of manifold model morphing in a flexible grasping
space, to reduce costs for biological 3D printing. The manifold model is a
mathematical representation of the 3D object to be printed, and the refinement
process involves optimizing the morphing parameters of the manifold model to
achieve desired printing outcomes. To enable flexibility in the grasping space,
we incorporate data-driven approaches, such as machine learning and data
augmentation techniques, to enhance the accuracy and robustness of the energy
management system. Our proposed system addresses the challenges of limited
sample data and complex morphologies of manifold models in layered additive
manufacturing. Our method is particularly applicable to soft robotics and
biomechanisms. We evaluate the performance of our system through extensive
experiments and demonstrate its effectiveness in predicting and managing energy
consumption in 3D printing processes. The results highlight the importance of
refining manifold model morphing in the flexible grasping space for achieving
energy-efficient 3D printing, contributing to the advancement of green and
sustainable manufacturing practices.
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering inter alia
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by correlating individual, temporally distributed
events within a multiple-data-stream environment is explored, and a range of
techniques is surveyed, covering model-based approaches, `programmed' AI, and
machine-learning paradigms. It is found that, in general, correlation is best
achieved via rule-based approaches, but that these suffer from a number of
drawbacks, such as the difficulty of developing and maintaining an appropriate
knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
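The two approaches the report contrasts — signature matching for known misuses and anomaly detection for unknown ones — and their hybridisation can be sketched in a few lines. Everything below (event attributes, rule shape, the thresholded anomaly score) is an illustrative assumption, not a technique from the report:

```python
def signature_match(event, rules):
    # Misuse (signature) component: flag an event that matches any
    # known-bad rule. Cannot detect misuses it was not programmed for.
    return any(all(event.get(k) == v for k, v in rule.items())
               for rule in rules)

def anomaly_score(event, profile):
    # Anomaly component: fraction of attributes falling outside the
    # learned "normal" profile (a set of allowed values per attribute).
    unseen = sum(1 for k, v in event.items()
                 if v not in profile.get(k, set()))
    return unseen / max(1, len(event))

def hybrid_detect(event, rules, profile, threshold=0.5):
    # Hybrid scheme: a signature hit means a known misuse; failing that,
    # a high anomaly score flags a possible unknown misuse.
    if signature_match(event, rules):
        return "known-misuse"
    if anomaly_score(event, profile) > threshold:
        return "possible-misuse"
    return "normal"
```

The split mirrors the report's error analysis: the signature branch alone issues false negatives on novel misuses, the anomaly branch alone issues false positives on unseen-but-innocent behaviour, and combining them covers both failure modes.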