Gym-Ignition: Reproducible Robotic Simulations for Reinforcement Learning
This paper presents Gym-Ignition, a new framework to create reproducible
robotic environments for reinforcement learning research. It interfaces with
the new generation of Gazebo, part of the Ignition Robotics suite, which
provides three main improvements for reinforcement learning applications
compared to the alternatives: 1) the modular architecture enables using the
simulator as a C++ library, simplifying the interconnection with external
software; 2) multiple physics and rendering engines are supported as plugins,
simplifying their selection during the execution; 3) the new distributed
simulation capability allows simulating complex scenarios while sharing the
load on multiple workers and machines. The core of Gym-Ignition is a component
that contains the Ignition Gazebo simulator and exposes a simple interface for
its configuration and execution. We provide a Python package that allows
developers to create robotic environments simulated in Ignition Gazebo.
Environments expose the common OpenAI Gym interface, making them compatible
out-of-the-box with third-party frameworks containing reinforcement learning
algorithms. Simulations can be executed in both headless and GUI mode, the
physics engine can run in accelerated mode, and instances can be parallelized.
Furthermore, the Gym-Ignition software architecture provides abstractions of the
Robot and the Task, making environments agnostic to the specific runtime. This
abstraction also allows environments to run in a real-time setting on actual
robotic platforms, even when driven by different middlewares.
Comment: Accepted in SII202
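The OpenAI Gym interface the abstract refers to is a small contract (`reset` and `step`). A minimal sketch of an environment implementing that contract, with a trivial one-dimensional task standing in for a simulated robot, might look like this (all names here are illustrative, not Gym-Ignition's actual API):

```python
# Minimal sketch of the OpenAI Gym environment contract that
# Gym-Ignition environments expose. A real Gym-Ignition environment
# would wrap the Ignition Gazebo simulator instead of this toy
# one-dimensional dynamics.

class ToyEnv:
    """Move a point toward the origin; reward is negative distance."""

    def reset(self):
        self.x = 5.0
        return self.x  # initial observation

    def step(self, action):
        # action is -1, 0, or +1
        self.x += float(action)
        reward = -abs(self.x)
        done = abs(self.x) < 0.5
        return self.x, reward, done, {}  # obs, reward, done, info


env = ToyEnv()
obs = env.reset()
total = 0.0
for _ in range(10):
    action = -1 if obs > 0 else 1   # trivial proportional policy
    obs, reward, done, info = env.step(action)
    total += reward
    if done:
        break
```

Because third-party RL frameworks only see this `reset`/`step` surface, they work with any environment exposing it, which is what makes the out-of-the-box compatibility claim possible.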
IN2GESOFT: Innovation and Integration of Methods for the Development and Quantitative Management of Software Projects TIN2004-06689-C03
This coordinated project aims to introduce new methods in software engineering
project management, integrating different quantitative and qualitative techniques into the
management processes. The goal underlying all three participating subprojects is the
generation of information tailored to efficient decision making in the direction of the
project. The topics investigated relate to decision making in dynamical environments and complex systems, software testing, and the analysis of management strategies for software process assessment across its different phases of
production.
The project sets up a methodological and conceptual framework, with supporting tools, that
facilitates decision making in software project management. This allows us to evaluate the risk and uncertainty associated with different management alternatives before
putting them into action. To this end, it is necessary to define a taxonomy of software models
that reflects the current reality of projects. Since software testing is one
of the most critical and costly processes for guaranteeing the quality and reliability
of the software, we research the automation of the software testing
process through the development of new test-case generation techniques, mainly
based on metaheuristics and model checking, in the domains of database and
internet applications. The software system developed will allow the integration of these
techniques, and of the management information needed, from the first phases of the life
cycle of a software product up to the last ones, such as regression testing
and maintenance.
The set of techniques that we investigate includes statistical analysis and
experimental design for obtaining metrics in the analysis phase, the application of Bayesian networks to decision processes, the application of process evaluation standards and quality models, the use of metaheuristic algorithms and prediction techniques
to optimize resources, visualization techniques for building control
dashboards, hybrid models for process simulation, and others.
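Metaheuristic test-case generation of the kind the project describes can be illustrated with a toy hill climber that searches for an input covering a target branch. The branch predicate, step sizes, and function names below are illustrative assumptions, not the project's actual system:

```python
import random

# Toy search-based test-case generation: hill-climb an integer input
# until it covers a target branch. Branch distance is the standard
# fitness function used in search-based software testing.

def branch_distance(x):
    """Distance to satisfying the (illustrative) branch `x == 1000`."""
    return abs(x - 1000)

def generate_test_case(seed=0, max_steps=10_000):
    rng = random.Random(seed)
    x = rng.randint(-10_000, 10_000)
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            return x  # found an input that covers the branch
        # Try a random neighbour; keep it if fitness does not worsen.
        candidate = x + rng.choice([-100, -10, -1, 1, 10, 100])
        if branch_distance(candidate) <= branch_distance(x):
            x = candidate
    return None  # budget exhausted without covering the branch
```

Real systems replace the toy fitness with instrumented branch distances over the program under test, and the hill climber with stronger metaheuristics such as genetic algorithms, but the search loop has this shape.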
FLAG3D: A 3D Fitness Activity Dataset with Language Instruction
With its continuously growing popularity around the world, fitness activity
analysis has become an emerging research topic in computer vision. While a
variety of new tasks and algorithms have been proposed recently, there is a
growing hunger for data resources with high-quality data, fine-grained
labels, and diverse environments. In this paper, we present FLAG3D, a
large-scale 3D fitness activity dataset with language instruction containing
180K sequences of 60 categories. FLAG3D features the following three aspects:
1) accurate and dense 3D human poses captured by an advanced MoCap system to
handle complex activities and large movements, 2) detailed and professional
language instruction to describe how to perform a specific activity, 3)
versatile video resources from a high-tech MoCap system, rendering software,
and cost-effective smartphones in natural environments. Extensive experiments
and in-depth analysis show that FLAG3D contributes great research value for
various challenges, such as cross-domain human action recognition, dynamic
human mesh recovery, and language-guided human action generation. Our dataset
and source code will be publicly available at
https://andytang15.github.io/FLAG3D
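One way to picture a FLAG3D-style sample is as a pose sequence paired with its activity category and language instruction. The field names and the 24-joint layout below are assumptions for illustration, not the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative sketch of one fitness-activity sample: a sequence of
# 3D poses plus the activity category and its language instruction.
# Field names and the 24-joint layout are assumptions, not FLAG3D's
# actual on-disk format.

@dataclass
class FitnessSequence:
    category: str        # one of the 60 activity classes
    instruction: str     # professional language instruction
    frames: List[List[Tuple[float, float, float]]] = field(default_factory=list)
    # frames[t][j] = (x, y, z) position of joint j at frame t

    def num_frames(self) -> int:
        return len(self.frames)

sample = FitnessSequence(
    category="squat",
    instruction="Keep your back straight and lower your hips "
                "until your thighs are parallel to the floor.",
    frames=[[(0.0, 0.0, 0.0)] * 24, [(0.0, 0.01, 0.0)] * 24],
)
```

Pairing the pose stream with the instruction text in one record is what enables the language-guided action generation task the abstract mentions.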
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance of an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant due to the fast-increasing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures, Published in ACM Computing Surveys (CSUR
Modeling, Simulation and Emulation of Intelligent Domotic Environments
Intelligent Domotic Environments are a promising approach, based on semantic models and commercial off-the-shelf domotic technologies, to realizing new intelligent buildings, but their complexity requires innovative design methodologies and tools for ensuring correctness. Suitable simulation and emulation approaches and tools must be adopted to allow designers to experiment with their ideas and to incrementally verify designed policies in a scenario where the environment is partly emulated and partly composed of real devices. This paper describes a framework that exploits UML 2.0 state diagrams for the automatic generation of device simulators from ontology-based descriptions of domotic environments. The DogSim simulator may simulate a complete building automation system in software, or may be integrated in the Dog Gateway, allowing partial simulation of virtual devices alongside real devices. Experiments on a real home show that the approach is feasible and can easily address both simulation and emulation requirements.
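Generating a device simulator from a state diagram amounts to emitting a state machine. A minimal hand-written sketch, with states and events that are illustrative assumptions rather than DogSim's actual ontology, could look like:

```python
# Minimal state-machine device simulator of the kind that could be
# generated from a UML state diagram. The states and events of this
# "lamp" device are illustrative, not DogSim's generated model.

class SimulatedLamp:
    # transition table: (state, event) -> next state
    TRANSITIONS = {
        ("off", "turn_on"): "on",
        ("on", "turn_off"): "off",
        ("on", "dim"): "dimmed",
        ("dimmed", "turn_off"): "off",
        ("dimmed", "turn_on"): "on",
    }

    def __init__(self):
        self.state = "off"

    def handle(self, event):
        """Apply an event; unknown events leave the state unchanged."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

lamp = SimulatedLamp()
lamp.handle("turn_on")   # off -> on
lamp.handle("dim")       # on -> dimmed
```

Because the transition table is plain data, it is the natural target for automatic generation from an ontology-based device description, and a gateway can swap such a simulated device for a real one behind the same event interface.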
Model Exploration Using OpenMOLE - a workflow engine for large scale distributed design of experiments and parameter tuning
OpenMOLE is a scientific workflow engine with a strong emphasis on workload
distribution. Workflows are designed using a high level Domain Specific
Language (DSL) built on top of Scala. It exposes natural parallelism constructs
to easily delegate the workload resulting from a workflow to a wide range of
distributed computing environments. In this work, we briefly expose the strong
assets of OpenMOLE and demonstrate its efficiency at exploring the parameter
set of an agent simulation model. We perform a multi-objective optimisation on
this model using computationally expensive Genetic Algorithms (GA). OpenMOLE
hides the complexity of designing such an experiment thanks to its DSL, and
transparently distributes the optimisation process. The example shows how an
initialisation of the GA with a population of 200,000 individuals can be
evaluated in one hour on the European Grid Infrastructure.
Comment: IEEE High Performance Computing and Simulation conference 2015, Jun 2015, Amsterdam, Netherland
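OpenMOLE's value lies in transparently distributing many independent model evaluations. The same pattern, at laptop scale and in Python rather than OpenMOLE's Scala DSL, can be sketched as follows (the "model" and its two objectives are illustrative stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

# Laptop-scale sketch of the pattern OpenMOLE scales up to grids:
# evaluate a population of parameter sets for a model in parallel.
# The model function and its two objectives are illustrative.

def run_model(params):
    """Pretend agent model: return two objectives to minimize."""
    x, y = params
    return (x ** 2 + y, abs(y - x))

population = [(x, y) for x in range(10) for y in range(10)]

# Each evaluation is independent, so the workload is embarrassingly
# parallel; OpenMOLE would delegate it to cluster or grid workers
# instead of local threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    objectives = list(pool.map(run_model, population))
```

A GA initialisation with 200,000 individuals, as in the abstract's experiment, is exactly this shape of workload scaled up to grid-sized worker pools.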
Generating collaborative systems for digital libraries: A model-driven approach
This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright @ 2010 The Authors.
The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
SANTO: Social Aerial NavigaTion in Outdoors
In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone or unmanned aerial vehicle, namely the quadrotor. Although until recently they were not used for commercial applications, their potential for the many tasks that require small and intelligent devices is huge. However, while the integrated hardware has advanced exponentially, the software used in these applications has not yet been refined to the same degree. Recently, this shift has become visible in the improvement of common robotics tasks, such as object tracking and autonomous navigation. These challenges grow when taking into account the dynamic nature of the real world, where knowledge about the current environment is constantly changing. Such settings are central to improving robot-human interaction, where the potential use of these devices is clear, and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, human brain behavior is approximated by so-called neural networks, so that the computing system performs as similarly as possible to a human. To this end, the system learns by error, which, much like human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to robot-human interaction narrows the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes to curating a high-quality dataset, in order to ensure that the learning process is optimal and no wrong actions are retained.
Therefore, it is essential to have a development platform in place to ensure these principles are enforced throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing handicaps found in pipelines of this computational scale are exposed, approaching each of them in an independent and simple manner, so that the proposed solutions can be leveraged by the maximum number of workflows. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, it addresses the shortage of integrated development systems for this kind of pipeline, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines.
Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the software stack available to tackle this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
Next Generation Cloud Computing: New Trends and Research Directions
The landscape of cloud computing has significantly changed over the last
decade. Not only have more providers and service offerings crowded the space,
but also cloud infrastructure that was traditionally limited to single provider
data centers is now evolving. In this paper, we firstly discuss the changing
cloud infrastructure and consider the use of infrastructure from multiple
providers and the benefit of decentralising computing away from data centers.
These trends have resulted in the need for a variety of new computing
architectures that will be offered by future cloud infrastructure. These
architectures are anticipated to impact areas, such as connecting people and
devices, data-intensive computing, the service space and self-learning systems.
Finally, we lay out a roadmap of challenges that will need to be addressed for
realising the potential of next generation cloud systems.
Comment: Accepted to Future Generation Computer Systems, 07 September 201