818 research outputs found
Massive Multi-Agent Data-Driven Simulations of the GitHub Ecosystem
Simulating and predicting planetary-scale techno-social systems poses heavy
computational and modeling challenges. The DARPA SocialSim program set the
challenge to model the evolution of GitHub, a large collaborative
software-development ecosystem, using massive multi-agent simulations. We
describe our best performing models and our agent-based simulation framework,
which we are currently extending to allow simulating other planetary-scale
techno-social systems. The challenge problem measured participants' ability,
given 30 months of meta-data on user activity on GitHub, to predict the next
months' activity as measured by a broad range of metrics applied to ground
truth, using agent-based simulation. The challenge required scaling to a
simulation of roughly 3 million agents producing a combined 30 million actions,
acting on 6 million repositories, on commodity hardware. It was also important
to use the data optimally to predict the agents' next moves. We describe the
agent framework and the data analysis employed by one of the winning teams in
the challenge. Six different agent models were tested based on a variety of
machine learning and statistical methods. While no single method proved the
most accurate on every metric, the broadly most successful models sampled from a
stationary probability distribution of actions and repositories for each agent.
Two reasons for the success of these agents were their use of a distinct
characterization of each agent, and the fact that GitHub users change their
behavior relatively slowly.
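The winning sampling approach can be sketched as follows; the event names, history format, and helper functions here are hypothetical illustrations, not the teams' actual code or data schema:

```python
import random
from collections import Counter

def fit_action_distribution(history):
    """Estimate a stationary probability distribution over
    (action, repository) pairs from one agent's event history."""
    counts = Counter(history)
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def sample_next_move(dist, rng):
    """Draw an agent's next (action, repository) pair from its
    fitted stationary distribution."""
    pairs = list(dist)
    weights = [dist[p] for p in pairs]
    return rng.choices(pairs, weights=weights, k=1)[0]

# Hypothetical event history for a single agent.
history = [("push", "repo-a"), ("push", "repo-a"), ("fork", "repo-b")]
dist = fit_action_distribution(history)
move = sample_next_move(dist, random.Random(0))
```

Because each agent gets its own distribution, the model captures the "distinct characterization of each agent" that the abstract credits for the approach's success.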
A Review of Platforms for the Development of Agent Systems
Agent-based computing is an active field of research with the goal of
building autonomous software or hardware entities. This task is often
facilitated by the use of dedicated, specialized frameworks. For almost thirty
years, many such agent platforms have been developed. Meanwhile, some of them
have been abandoned, others continue their development and new platforms are
released. This paper presents an up-to-date review of the existing agent
platforms and also a historical perspective of this domain. It aims to serve as
a reference point for people interested in developing agent systems. This work
details the main characteristics of the included agent platforms, together with
links to specific projects where they have been used. It distinguishes between
the active platforms and those no longer under development or with unclear
status. It also classifies the agent platforms as general purpose ones, free or
commercial, and specialized ones, which can be used for particular types of
applications.
Comment: 40 pages, 2 figures, 9 tables, 83 references
Ecosystem Simulator - Optimising Performance
The goal of this end-of-degree project is to develop an ecosystem simulator that can be used to
learn how ecosystems work in an easy, visual and fun way. Studying any discipline is
tough, so this project aims to make the learning journey easier and more enjoyable. The
ecosystem used in this project includes an environment with different biomes, trees, animals
and food, all of which influence each other.
Most similar projects run either dozens or hundreds of complex entities, or thousands of
simple ones. This project aims to take simulators a step further by running thousands of
non-trivial entities at the same time in an agent-based system while keeping a decent
framerate, striking a balance between a large number of entities and interesting entity
behaviour.
Because such a simulator can take years and multiple people to develop, this
specific project is a prototype that aims to show the approach's potential.
The simulator is built with Unity, Blender and C#. Blender was used to model the
1×1 km 3D island used in the simulation, while Unity and C# tie everything together
and implement all the systems and AIs.
The project follows an agile, feature-driven development methodology with short
iterations: each iteration adds a new feature, and any bug found during the
previous iteration is fixed during the current one.
FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning
Finance is a particularly difficult playground for deep reinforcement
learning. Establishing high-quality market environments and benchmarks
for financial reinforcement learning is challenging due to three major factors,
namely, the low signal-to-noise ratio of financial data, survivorship bias in
historical data, and model overfitting in the backtesting stage. In this paper,
we present an openly accessible FinRL-Meta library that has been actively
maintained by the AI4Finance community. First, following a DataOps paradigm, we
provide hundreds of market environments through an automatic pipeline that
collects dynamic datasets from real-world markets and processes them into
gym-style market environments. Second, we reproduce popular papers as stepping
stones for users to design new trading strategies. We also deploy the library
on cloud platforms so that users can visualize their own results and assess the
relative performance via community-wise competitions. Third, FinRL-Meta
provides tens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. FinRL-Meta is
available at: https://github.com/AI4Finance-Foundation/FinRL-Meta
Comment: NeurIPS 2022 Datasets and Benchmarks. 36th Conference on Neural
Information Processing Systems Datasets and Benchmarks Track
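As a rough illustration of what "gym-style" means here — an environment exposing `reset()` and `step(action)` that returns `(observation, reward, done, info)` — here is a minimal toy sketch with a hard-coded price series. The class name and trading rules are invented for illustration; FinRL-Meta's real environments are generated from live market data:

```python
class MinimalMarketEnv:
    """Toy gym-style environment: hold or buy one share of a single
    asset against a fixed price series. Illustrative sketch only."""

    def __init__(self, prices, cash=1000.0):
        self.prices = list(prices)
        self.initial_cash = cash

    def reset(self):
        self.t = 0
        self.cash = self.initial_cash
        self.shares = 0.0
        return self._obs()

    def step(self, action):
        # action: 0 = hold, 1 = buy one share (if affordable)
        price = self.prices[self.t]
        if action == 1 and self.cash >= price:
            self.cash -= price
            self.shares += 1.0
        self.t += 1
        done = self.t >= len(self.prices) - 1
        # Reward: change in portfolio value since the start.
        value = self.cash + self.shares * self.prices[self.t]
        reward = value - self.initial_cash
        return self._obs(), reward, done, {}

    def _obs(self):
        return (self.prices[self.t], self.cash, self.shares)

env = MinimalMarketEnv([10.0, 11.0, 12.0])
obs = env.reset()
obs, reward, done, info = env.step(1)  # buy one share at 10.0
```

The same `reset`/`step` contract is what lets any RL agent written against the gym interface train on any such environment interchangeably.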
AGOCS – Accurate Google Cloud Simulator Framework
This paper presents the Accurate Google Cloud Simulator (AGOCS) – a novel high-fidelity Cloud workload simulator based on parsing real workload traces, which can be conveniently used on a desktop machine for day-to-day research. Our simulation is based on real-world workload traces from a Google cluster with 12.5K nodes over a period of one calendar month. The framework is able to reveal very precise and detailed parameters of the executed jobs, tasks and nodes, as well as to provide actual resource usage statistics. The system has been implemented in the Scala language with a focus on parallel execution and an easy-to-extend design. The paper presents the detailed structural framework of AGOCS and discusses our main design decisions, whilst also suggesting alternative and possibly performance-enhancing future approaches. The framework is available via an open-source GitHub repository.
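AGOCS itself is written in Scala, but the core idea of turning raw trace records into typed task events can be sketched briefly in Python. The field layout below is a simplified, hypothetical schema, not the exact Google cluster-trace format:

```python
from dataclasses import dataclass

@dataclass
class TaskEvent:
    """One parsed record from a (hypothetical) cluster trace."""
    timestamp_us: int
    job_id: int
    task_index: int
    cpu_request: float
    memory_request: float

def parse_task_event(line):
    """Parse one CSV trace record into a TaskEvent.
    Field order here is an illustrative simplification."""
    ts, job, idx, cpu, mem = line.strip().split(",")
    return TaskEvent(int(ts), int(job), int(idx), float(cpu), float(mem))

event = parse_task_event("600000000,6253771429,0,0.125,0.07")
```

Converting every record into a typed object up front is what allows a simulator of this kind to replay jobs, tasks and per-node resource usage precisely rather than approximating them from aggregates.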
Big data reference architecture for Industry 4.0: including economic and ethical implications
The rapid progress in Industry 4.0 is achieved through innovations in several fields, e.g., manufacturing, big data, and artificial intelligence. The thesis motivates the need for a Big Data architecture to apply artificial intelligence in Industry 4.0 and presents a cognitive architecture for artificial intelligence – CAAI – as a possible solution, which is especially suited to the challenges of small and medium-sized enterprises.
The work examines the economic and ethical implications of those technologies and highlights the benefits but also the challenges for countries, companies and individual workers. The "Industry 4.0 Questionnaire for SMEs" was conducted to gain insights into small and medium-sized companies' requirements and needs.
Thus, the new CAAI architecture presents a software design blueprint and provides a set of open-source building blocks to support companies during implementation. Different use cases demonstrate the applicability of the architecture, and the subsequent evaluation verifies its functionality.
Dynamic Datasets and Market Environments for Financial Reinforcement Learning
The financial market is a particularly challenging playground for deep
reinforcement learning due to its unique feature of dynamic datasets. Building
high-quality market environments for training financial reinforcement learning
(FinRL) agents is difficult due to major factors such as the low
signal-to-noise ratio of financial data, survivorship bias of historical data,
and model overfitting. In this paper, we present FinRL-Meta, a data-centric and
openly accessible library that processes dynamic datasets from real-world
markets into gym-style market environments and has been actively maintained by
the AI4Finance community. First, following a DataOps paradigm, we provide
hundreds of market environments through an automatic data curation pipeline.
Second, we provide homegrown examples and reproduce popular research papers as
stepping stones for users to design new trading strategies. We also deploy the
library on cloud platforms so that users can visualize their own results and
assess the relative performance via community-wise competitions. Third, we
provide dozens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. The open-source
codes for the data curation pipeline are available at
https://github.com/AI4Finance-Foundation/FinRL-Meta
Comment: 49 pages, 15 figures. arXiv admin note: substantial text overlap with
arXiv:2211.0310
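One common guard against the backtest overfitting mentioned above is walk-forward (rolling-window) evaluation, in which a strategy is always tested on data that strictly follows its training window. This is a generic sketch of the idea, not FinRL-Meta's actual pipeline API:

```python
def rolling_windows(n_samples, train_size, test_size, step):
    """Yield (train_indices, test_indices) pairs for walk-forward
    backtesting: each test window immediately follows its training
    window, so no future data leaks into training."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += step

# E.g. 10 time steps, train on 4, test on the next 2, slide by 2.
splits = list(rolling_windows(n_samples=10, train_size=4, test_size=2, step=2))
```

Averaging a strategy's performance across all such windows gives a far less optimistic estimate than a single in-sample backtest.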