An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and take decisions pertaining to the proper
functioning of the networks from the network-generated data. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
Patients with basal ganglia damage show preserved learning in an economic game.
Both basal ganglia (BG) and orbitofrontal cortex (OFC) have been widely implicated in social and non-social decision-making. However, unlike OFC damage, BG pathology is not typically associated with disturbances in social functioning. Here we studied the behavior of patients with focal lesions to either BG or OFC in a multi-strategy competitive game known to engage these regions. We find that whereas OFC patients are significantly impaired, BG patients show intact learning in the economic game. By contrast, when information about the strategic context is absent, both cohorts are significantly impaired. Computational modeling further shows a preserved ability in BG patients to learn by anticipating and responding to the behavior of others using the strategic context. These results suggest that apparently divergent findings on the BG contribution to social decision-making may instead reflect a model in which higher-order learning processes are dissociable from trial-and-error learning, and can be preserved despite BG damage.
The role of learning on industrial simulation design and analysis
The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging
from sourcing to operations to finance, starting at the strategic level and proceeding towards tactical and
operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond
being a static problem-solving exercise and requires integration with learning. This article discusses the role
of learning in simulation design and analysis motivated by the needs of industrial problems and describes
how selected tools of statistical learning can be utilized for this purpose.
Exploiting Prior Knowledge in Robot Motion Skills Learning
This thesis presents a new robot learning framework, its application to exploit prior knowledge by encoding movement primitives in the form of a novel motion library, and the transfer of such knowledge to other robotic platforms in the form of shared latent spaces.
In robot learning, it is often desirable to have robots that learn and acquire new skills rapidly. However, existing methods are often specific to a single user-defined task and time-consuming to train. This includes, for instance, end-to-end models that can require substantial time to learn a given skill. Such methods often start with little or no prior knowledge and move slowly from erratic movements to the specific required motion. This is very different from how animals and humans learn motion skills. For instance, zebras in the African savannah learn to walk within minutes of being born, which suggests that some form of prior knowledge is encoded in them. Leveraging such information may help improve and accelerate the learning and generation of new skills. These observations raise questions such as: how would this prior knowledge be represented? And how much would it help the learning process? Additionally, once learned, these models often do not transfer well to other robotic platforms, requiring the same skills to be taught to each robot anew. This significantly increases the total training time and renders the demonstration phase tedious. Could this prior knowledge instead be exploited to accelerate the learning of new skills by transferring it to other robots? These are some of the questions we investigate in this thesis. Before examining them, however, a practical tool that allows one to easily test ideas in robot learning is needed. This tool would have to be easy to use, intuitive, generic, and modular, and would need to let the user easily implement different ideas and compare different models and algorithms. With such a tool implemented, we are then able to focus on our original questions.
Breadcrumbs to the Goal: Goal-Conditioned Exploration from Human-in-the-Loop Feedback
Exploration and reward specification are fundamental and intertwined
challenges for reinforcement learning. Solving sequential decision-making tasks
that demand expansive exploration requires either carefully designed reward
functions or novelty-seeking exploration bonuses. Human supervisors
can provide effective guidance in the loop to direct the exploration process,
but prior methods to leverage this guidance require constant synchronous
high-quality human feedback, which is expensive and impractical to obtain. In
this work, we present a technique called Human Guided Exploration (HuGE), which
uses low-quality feedback from non-expert users that may be sporadic,
asynchronous, and noisy. HuGE guides exploration for reinforcement learning not
only in simulation but also in the real world, all without meticulous reward
specification. The key concept involves bifurcating human feedback and policy
learning: human feedback steers exploration, while self-supervised learning
from the exploration data yields unbiased policies. This procedure can leverage
noisy, asynchronous human feedback to learn policies with no hand-crafted
reward design or exploration bonuses. HuGE is able to learn a variety of
challenging multi-stage robotic navigation and manipulation tasks in simulation
using crowdsourced feedback from non-expert users. Moreover, this paradigm can
be scaled to learning directly on real-world robots, using occasional,
asynchronous feedback from human supervisors.