16,111 research outputs found
Open Programming Language Interpreters
Context: This paper presents the concept of open programming language
interpreters and the implementation of a framework-level metaobject protocol
(MOP) to support them. Inquiry: We address the problem of dynamic interpreter
adaptation: tailoring the interpreter's behavior to the task at hand and
introducing new features to fulfill unforeseen requirements. Many languages
provide a MOP that supports reflection to some degree. However, MOPs are
typically language-specific, their reflective functionality is often
restricted, and the adaptation and application logic are often mixed, which
complicates understanding and maintaining the source code. Our system
overcomes these limitations. Approach: We designed and implemented a system to
support open programming language interpreters. The prototype implementation is
integrated in the Neverlang framework. The system exposes the structure,
behavior and the runtime state of any Neverlang-based interpreter with the
ability to modify it. Knowledge: Our system provides complete control over an
interpreter's structure, behavior, and runtime state. The approach is
applicable to every Neverlang-based interpreter, and adaptation code can
potentially be reused across different language implementations. Grounding:
With a prototype implementation, we focused on evaluating feasibility. The
paper shows that our approach effectively addresses problems commonly found in
the research literature. A demonstrative video and examples illustrate
our approach on dynamic software adaptation, aspect-oriented programming,
debugging and context-aware interpreters. Importance: To our knowledge, our
paper presents the first reflective approach targeting a general framework for
language development. Our system provides full reflective support for free to
any Neverlang-based interpreter. We are not aware of any prior application of
open implementations to programming language interpreters in the sense defined
in this paper. Rather than substituting for other approaches, we believe our
system can be used as a complementary technique in situations where other
approaches face serious limitations.
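The separation the abstract argues for (adaptation logic kept apart from language logic) can be pictured with a small sketch. This is not the Neverlang MOP; it is a hypothetical Python toy in which an interpreter exposes its evaluation step through registered hooks, so reflective adaptation code never touches the interpreter's own source:

```python
# Illustrative sketch (not the Neverlang MOP): a tiny expression
# interpreter that exposes its evaluation step through metaobject-style
# hooks, so adaptation code stays separate from the language logic.

class Interpreter:
    def __init__(self):
        self.hooks = []          # adaptation code registered at runtime

    def on_eval(self, hook):
        """Register a hook invoked on every evaluation step."""
        self.hooks.append(hook)

    def eval(self, expr):
        # expr is a nested tuple, e.g. ("+", 1, ("*", 2, 3))
        for hook in self.hooks:
            expr = hook(expr)    # hooks may observe or rewrite the node
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            l, r = self.eval(lhs), self.eval(rhs)
            return l + r if op == "+" else l * r
        return expr

interp = Interpreter()

# Adaptation example: trace every evaluated node without modifying
# the interpreter itself (cf. the paper's debugging/AOP use cases).
trace = []
interp.on_eval(lambda node: (trace.append(node), node)[1])

print(interp.eval(("+", 1, ("*", 2, 3))))   # 7
```

The same hook mechanism could host the other adaptations the abstract mentions (aspect weaving, context-aware behavior) without changing the evaluator.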
Optimizing memory management for optimistic simulation with reinforcement learning
Simulation is a powerful technique to explore complex scenarios and analyze systems across a wide range of disciplines. To allow for efficient exploitation of the available computing power, speculative Time Warp-based Parallel Discrete Event Simulation is widely recognized as a viable solution. In this context, the rollback operation is a fundamental building block for supporting a correct execution even when causality inconsistencies materialize a posteriori. If this operation is supported via checkpoint/restore strategies, memory management plays a fundamental role in ensuring high performance of the simulation run. With few exceptions, adaptive protocols targeting memory management for Time Warp-based simulations have mostly been based on pre-defined analytic models of the system, expressed as closed-form functions that map the system's state to control parameters. The underlying assumption is that the model itself is optimal. In this paper, we present an approach that exploits reinforcement learning techniques. Rather than assuming an optimal control strategy, we seek to find the optimal strategy through parameter exploration. A value function that captures the history of system feedback is used, and no a-priori knowledge of the system is required. An experimental assessment of the viability of our proposal is also provided for a mobile cellular system simulation.
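The core idea, learning a memory-management control parameter from observed feedback instead of from a closed-form model, can be sketched in miniature. The cost model below is a toy stand-in (the paper targets real Time Warp checkpoint/rollback costs, and its value function is richer than this running average); all names and numbers here are illustrative assumptions:

```python
import random

# Hedged sketch of the abstract's idea: learn a checkpoint interval
# for Time Warp from observed rewards, with no analytic model of the
# system.  simulate_round() is a TOY cost model, not the paper's.

random.seed(0)

INTERVALS = [1, 2, 4, 8, 16]          # candidate checkpoint intervals
q = {a: 0.0 for a in INTERVALS}       # learned value of each setting
alpha, eps = 0.1, 0.2                 # learning rate, exploration rate

def simulate_round(interval):
    """Toy trade-off: frequent checkpoints cost memory and time,
    sparse checkpoints make rollback (coast-forward) expensive."""
    checkpoint_cost = 10.0 / interval
    rollback_cost = 0.5 * interval * random.random()
    return -(checkpoint_cost + rollback_cost)   # reward = -cost

for _ in range(2000):
    # epsilon-greedy exploration of the control parameter
    if random.random() < eps:
        a = random.choice(INTERVALS)
    else:
        a = max(q, key=q.get)
    r = simulate_round(a)
    q[a] += alpha * (r - q[a])        # running value estimate

print("learned interval:", max(q, key=q.get))
```

No prior knowledge of the cost function is assumed by the learner; it only sees the reward signal, which is the property the abstract emphasizes.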
MonALISA: A Distributed Monitoring Service Architecture
The MonALISA (Monitoring Agents in A Large Integrated Services Architecture)
system provides a distributed monitoring service. MonALISA is based on a
scalable Dynamic Distributed Services Architecture which is designed to meet
the needs of physics collaborations for monitoring global Grid systems, and is
implemented using JINI/JAVA and WSDL/SOAP technologies. The scalability of the
system derives from three mechanisms: multithreaded Station Servers host a
variety of loosely coupled, self-describing dynamic services; each service can
register itself and then be discovered and used by any other services or
clients that require such information; and all services and clients
subscribing to a set of events (state changes) in the system are notified
automatically. The framework integrates several existing
monitoring tools and procedures to collect parameters describing computational
nodes, applications and network performance. It has built-in SNMP support and
network-performance monitoring algorithms that enable it to monitor end-to-end
network performance as well as the performance and state of site facilities in
a Grid. MonALISA is currently running around the clock on the US CMS test Grid
as well as an increasing number of other sites. It is also being used to
monitor the performance and optimize the interconnections among the reflectors
in the VRVS system. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 8 pages, pdf. PSN MOET00
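The registration, discovery, and event-subscription mechanisms the abstract describes can be illustrated with a minimal sketch. This is not MonALISA code (the real system uses JINI lookup and multithreaded Station Servers); class and method names below are hypothetical:

```python
# Minimal sketch (not MonALISA code) of the three mechanisms in the
# abstract: services register themselves, peers discover them, and
# subscribers are notified automatically of state-change events.

class StationServer:
    def __init__(self):
        self.registry = {}                 # service name -> description
        self.subscribers = []              # callbacks for state changes

    def register(self, name, description):
        """A dynamic service registers itself so others can discover it."""
        self.registry[name] = description
        self.notify(("registered", name))

    def discover(self, name):
        """Any service or client can look up a registered service."""
        return self.registry.get(name)

    def subscribe(self, callback):
        """Subscribers are notified automatically of every event."""
        self.subscribers.append(callback)

    def notify(self, event):
        for cb in self.subscribers:
            cb(event)

server = StationServer()
events = []
server.subscribe(events.append)
server.register("snmp-monitor", "collects node parameters via SNMP")
print(server.discover("snmp-monitor"))
print(events)   # [('registered', 'snmp-monitor')]
```

In the real architecture these calls cross the network via JINI/SOAP rather than in-process callbacks, but the loose coupling is the same.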
Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks
This paper investigates the use of deep reinforcement learning (DRL) in a MAC
protocol for heterogeneous wireless networking referred to as
Deep-reinforcement Learning Multiple Access (DLMA). The thrust of this work is
partially inspired by the vision of DARPA SC2, a 3-year competition whereby
competitors are to come up with a clean-slate design that "best share spectrum
with any network(s), in any environment, without prior knowledge, leveraging on
machine-learning technique". Specifically, this paper considers the problem of
sharing time slots among multiple time-slotted networks that adopt
different MAC protocols. One of the MAC protocols is DLMA. The other two are
TDMA and ALOHA. The nodes operating DLMA do not know that the other two MAC
protocols are TDMA and ALOHA. Yet, by a series of observations of the
environment, its own actions, and the resulting rewards, a DLMA node can learn
an optimal MAC strategy to coexist harmoniously with the TDMA and ALOHA nodes
according to a specified objective (e.g., the objective could be the sum
throughput of all networks, or a general alpha-fairness objective).
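The observe/act/reward loop the abstract describes can be sketched with a toy coexistence scenario. The paper uses a deep Q-network over richer observation histories; the tabular stand-in below, with a single TDMA neighbor whose slot assignment is unknown to the learner, only illustrates the mechanism:

```python
import random

# Toy stand-in for the DLMA setting (the paper uses deep RL; this
# tabular sketch only shows the observe/act/reward loop).  A learning
# node shares a 5-slot frame with a TDMA node that always transmits
# in slot 0; the learner is not told this and must discover when to
# stay silent to avoid collisions.

random.seed(1)
FRAME = 5
TDMA_SLOT = 0                  # hidden from the learner
q = {(s, a): 0.0 for s in range(FRAME) for a in (0, 1)}  # 0=wait, 1=send
alpha, eps = 0.2, 0.1

for t in range(5000):
    slot = t % FRAME           # observation: position in the frame
    a = random.choice((0, 1)) if random.random() < eps \
        else max((0, 1), key=lambda x: q[(slot, x)])
    tdma_sends = (slot == TDMA_SLOT)
    # reward = sum throughput of this slot: 1 for any successful
    # transmission, 0 for a collision or a wasted idle slot
    if a == 1 and not tdma_sends:
        r = 1.0                # learner transmits successfully
    elif a == 0 and tdma_sends:
        r = 1.0                # TDMA node transmits successfully
    else:
        r = 0.0
    q[(slot, a)] += alpha * (r - q[(slot, a)])

policy = [max((0, 1), key=lambda x: q[(s, x)]) for s in range(FRAME)]
print(policy)   # the learner waits in the TDMA slot, sends otherwise
```

The reward here encodes the sum-throughput objective mentioned in the abstract; an alpha-fairness objective would change only the reward computation.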
Engineering a QoS Provider Mechanism for Edge Computing with Deep Reinforcement Learning
With the development of new system solutions that integrate traditional cloud
computing with the edge/fog computing paradigm, dynamic optimization of service
execution has become a challenge due to the edge computing resources being more
distributed and dynamic. How to optimize the execution to provide Quality of
Service (QoS) in edge computing depends on both the system architecture and the
resource allocation algorithms in place. We design and develop a QoS provider
mechanism, as an integral component of a fog-to-cloud system, to work in
dynamic scenarios by using deep reinforcement learning. We choose reinforcement
learning since it is particularly well suited for solving problems in dynamic
and adaptive environments where the decision process needs to be frequently
updated. We specifically use a Deep Q-learning algorithm that optimizes QoS by
identifying and blocking devices that potentially cause service disruption due
to dynamicity. We compare the reinforcement-learning-based solution with
state-of-the-art heuristics that use telemetry data, and analyze pros and cons
- …
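The control idea in the last abstract, learning to block devices whose dynamicity disrupts service execution, can be sketched as follows. The paper uses a Deep Q-network; this tabular toy, with hypothetical device classes and disruption probabilities, only shows the mechanism:

```python
import random

# Hedged sketch of the QoS-provider idea (the paper uses Deep
# Q-learning; this tabular toy only illustrates the mechanism): the
# agent learns to block device classes whose dynamicity tends to
# cause service disruption.  Classes and probabilities are invented.

random.seed(2)
CLASSES = ["stable", "mobile", "flaky"]          # hypothetical classes
DISRUPT_PROB = {"stable": 0.05, "mobile": 0.4, "flaky": 0.9}
q = {(c, a): 0.0 for c in CLASSES for a in ("admit", "block")}
alpha, eps = 0.1, 0.2

for _ in range(3000):
    c = random.choice(CLASSES)                   # a device requests service
    a = random.choice(("admit", "block")) if random.random() < eps \
        else max(("admit", "block"), key=lambda x: q[(c, x)])
    if a == "block":
        r = 0.0                                  # no QoS gain, no risk
    else:
        # admitting yields QoS unless the device disrupts the service
        r = -1.0 if random.random() < DISRUPT_PROB[c] else 1.0
    q[(c, a)] += alpha * (r - q[(c, a)])

policy = {c: max(("admit", "block"), key=lambda x: q[(c, x)])
          for c in CLASSES}
print(policy)
```

A telemetry-driven heuristic would instead threshold observed disruption rates directly; the learned policy adapts as those rates drift, which is the dynamic-scenario advantage the abstract claims.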