Engineering the Hardware/Software Interface for Robotic Platforms - A Comparison of Applied Model Checking with Prolog and Alloy
Robotic platforms serve different use cases ranging from experiments for
prototyping assistive applications up to embedded systems for realizing
cyber-physical systems in various domains. We are using 1:10 scale miniature
vehicles as a robotic platform to conduct research in the domain of
self-driving cars and collaborative vehicle fleets. Thus, experiments with
different sensors, such as ultrasonic sensors, infrared sensors, and rotary encoders, need to
be prepared and realized using our vehicle platform. For each setup, we need to
configure the hardware/software interface board to handle all sensors and
actuators. Therefore, we need to find a specific configuration setting for each
pin of the interface board that can handle our current hardware setup but which
is also flexible enough to support further sensors or actuators for future use
cases. In this paper, we show how to model the domain of the configuration
space for a hardware/software interface board to enable model checking for
solving the tasks of finding any, all, and the best possible pin configuration.
We present results from a formal experiment applying the declarative languages
Alloy and Prolog to guide the process of engineering the hardware/software
interface for robotic platforms on the example of a configuration complexity up
to ten pins resulting in a configuration space greater than 14.5 million
possibilities. Our results show that our domain model in Alloy performs better
compared to Prolog to find feasible solutions for larger configurations with an
average time of 0.58s. To find the best solution, our model for Prolog performs
better taking only 1.38s for the largest desired configuration; however, this
important use case is currently not covered by the existing tools for the
hardware used as an example in this article.
Comment: Presented at DSLRob 2013 (arXiv:cs/1312.5952).
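The search tasks described above ("any", "all", and "best" pin configuration) can be illustrated with a brute-force sketch. The pin modes and sensor requirements below are invented for illustration; the paper's actual Alloy and Prolog models encode a far richer constraint space, and exhaustive enumeration would not scale to the 14.5-million-configuration instances the experiment covers.

```python
from itertools import permutations

# Hypothetical pin capabilities and sensor requirements (not the paper's
# actual hardware model).
PIN_MODES = {
    0: {"digital", "pwm"},
    1: {"digital", "analog"},
    2: {"analog"},
    3: {"digital", "pwm", "analog"},
}
SENSORS = {"ultrasonic": "digital", "infrared": "analog", "encoder": "pwm"}

def find_configs(pin_modes, sensors):
    """Yield every assignment of distinct pins that satisfies all sensors."""
    names = list(sensors)
    for pins in permutations(pin_modes, len(names)):
        if all(sensors[s] in pin_modes[p] for s, p in zip(names, pins)):
            yield dict(zip(names, pins))

configs = list(find_configs(PIN_MODES, SENSORS))
```

Enumerating `find_configs` answers the "find all" task, taking the first yielded element answers "find any", and the "best" task would additionally need a cost function over configurations.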
Optimal dynamic operations scheduling for small-scale satellites
A satellite's operations schedule is crafted based on each subsystem's/payload's operational needs, while taking into account the available resources on board. A number of operating modes are carefully designed, each one with a different operations plan that can serve emergency cases, reduced-functionality cases, the nominal case, the end-of-mission case, and so on. During the mission span, should any operations planning amendments arise, a new schedule needs to be manually developed and uplinked to the satellite during a communications window. Current operations planning techniques offer a reduced number of solutions while approaching operations scheduling in a rigid manner. Given the complexity of a satellite as a system, as well as the numerous restrictions and uncertainties imposed by both environmental and technical parameters, optimising the operations scheduling in an automated fashion can offer a flexible approach while enhancing mission robustness. In this paper we present Opt-OS (Optimised Operations Scheduler), a tool loosely based on the Ant Colony System algorithm, which can solve the Dynamic Operations Scheduling Problem (DOSP). The DOSP is treated as a single-objective, multiple-constraint discrete optimisation problem, where the objective is to maximise the useful operation time per subsystem on board while respecting a set of constraints, such as the feasible operation timeslot per payload or keeping the power consumption below a specific threshold. Given basic mission inputs, such as the Keplerian elements of the satellite's orbit, its launch date, and the individual subsystems' power consumption and useful operation periods, Opt-OS outputs the optimal ON/OFF state per subsystem per orbital time step, keeping each subsystem's useful operation time at a maximum while ensuring that constraints such as the power availability threshold are never violated.
Opt-OS can provide the flexibility needed for designing an optimal operations schedule on the spot throughout any mission phase, as well as the ability to automatically schedule operations in case of emergency. Furthermore, Opt-OS can be used in conjunction with multi-objective optimisation tools to perform full system optimisation: based on the optimal operations schedule, subsystem design parameters can be optimised to achieve maximal usage of the satellite while keeping its mass minimal.
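The structure of a DOSP solution (per-step ON/OFF states under a power budget) can be sketched with a simplified greedy baseline. The subsystem power draws, useful-operation slots, and power budget below are invented; Opt-OS itself uses an Ant Colony System metaheuristic on real orbit-derived inputs rather than this one-pass heuristic.

```python
# Greedy DOSP baseline (illustrative assumption, not the Opt-OS algorithm):
# at each time step, switch on the cheapest subsystems whose operation is
# useful at that step, as long as the per-step power budget allows it.
def greedy_schedule(power_draw, useful, budget, steps):
    """Return a dict mapping each subsystem to its per-step ON/OFF list."""
    schedule = {s: [False] * steps for s in power_draw}
    for t in range(steps):
        remaining = budget
        for s in sorted(power_draw, key=power_draw.get):
            if (s, t) in useful and power_draw[s] <= remaining:
                schedule[s][t] = True
                remaining -= power_draw[s]
    return schedule

# Invented example: three subsystems over four orbital time steps.
power = {"adcs": 2.0, "camera": 3.0, "radio": 4.0}
useful = {("adcs", t) for t in range(4)} | {("camera", 1), ("radio", 1)}
plan = greedy_schedule(power, useful, budget=5.0, steps=4)
```

At step 1 the ADCS and camera fit the 5.0 W budget together, so the radio stays OFF even though its operation would be useful; a metaheuristic like Opt-OS explores such trade-offs globally instead of step by step.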
Multi-agent Adaptive Architecture for Flexible Distributed Real-time Systems
Modern critical embedded systems are becoming more and more complex and usually react to their environment, which requires them to amend their behaviors by applying run-time reconfiguration scenarios. A system is defined in this paper as a set of networked devices, each of which has its own operating system, a processor to execute related periodic software tasks, and a local battery. A reconfiguration is any operation allowing the addition, removal, or update of tasks to adapt the device and the whole system to its environment. It may be a reaction to a fault or even an optimization of the system's functional behavior. Nevertheless, such a scenario can cause the violation of real-time or energy constraints, which is considered a critical run-time problem. We propose a multi-agent adaptive architecture to handle dynamic reconfigurations and ensure the correct execution of the concurrent real-time distributed tasks under energy constraints. The proposed architecture integrates a centralized scheduler agent (ScA), which is the common decision-making element for the scheduling problem. It is able to carry out the required run-time solutions based on operations research techniques and mathematical tools for the system's feasibility. This architecture also assigns a reconfiguration agent (RA_p) to each device p to control and handle the local reconfiguration scenarios under the instructions of the ScA. A token-based protocol is defined in this case for the coordination between the different distributed agents in order to guarantee the whole system's feasibility under energy constraints.
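The interplay between the ScA and the per-device RAs can be sketched as follows. The energy-budget feasibility check and the load values are placeholders invented for illustration; the paper's ScA relies on operations-research formulations, not a simple sum.

```python
# Toy sketch of token-based coordination (assumed structure, not the
# paper's protocol): the token visits each reconfiguration agent (RA_p)
# in ring order, and a change is applied only if the scheduler agent
# (ScA) still deems the whole system feasible.
class ScA:
    def __init__(self, energy_budget):
        self.energy_budget = energy_budget

    def feasible(self, loads):
        # Placeholder feasibility test: total load must fit the budget.
        return sum(loads.values()) <= self.energy_budget

def token_round(sca, requests, loads):
    """One pass of the token; returns the new loads and the RAs whose
    pending reconfiguration was accepted."""
    applied = []
    for ra, delta in requests:          # token visits RAs in ring order
        trial = dict(loads)
        trial[ra] = trial.get(ra, 0) + delta
        if sca.feasible(trial):
            loads = trial
            applied.append(ra)
    return loads, applied

sca = ScA(energy_budget=10)
loads, applied = token_round(
    sca, [("RA_1", 4), ("RA_2", 5), ("RA_3", 3)], {})
```

In this invented run, RA_3's request is rejected because granting it would exceed the shared energy budget already consumed by RA_1 and RA_2.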
Balancing and scheduling tasks in assembly lines with sequence-dependent setup times
The classical Simple Assembly Line Balancing Problem (SALBP) has been widely enriched over the past few years with many realistic approaches, and much effort has been made to reduce the distance between academic theory and industrial reality. Despite this effort, the scheduling of the execution of tasks assigned to each workstation following the balancing of the assembly line has been scarcely reported in the scientific literature. This is supposed to be an operational concern that the worker should solve alone, but in several real environments setups between tasks exist, and optimal or near-optimal task schedules should be provided inside each workstation. The problem presented in this paper adds sequence-dependent setup time considerations to the classical SALBP in the following way: whenever a task is assigned next to another at the same workstation, a setup time must be added to compute the global workstation time. After formulating a mathematical model for this innovative problem and showing its high combinatorial nature, eight different heuristic rules and a GRASP algorithm are designed and tested for solving the problem in reasonable computational time.
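The workstation-time computation described above can be made concrete. The task times and setup matrix below are invented, and the sketch only sequences tasks inside a single workstation by exhaustive search; the paper's model additionally balances tasks across stations and uses heuristics and GRASP instead of enumeration.

```python
from itertools import permutations

# Invented single-workstation instance: three tasks with processing times
# and a sequence-dependent setup matrix.
TASK_TIME = {"a": 4, "b": 3, "c": 5}
SETUP = {("a", "b"): 1, ("b", "a"): 2, ("a", "c"): 2,
         ("c", "a"): 1, ("b", "c"): 3, ("c", "b"): 1}

def workstation_time(sequence):
    """Global workstation time: task times plus a setup whenever one task
    is executed immediately after another."""
    total = sum(TASK_TIME[t] for t in sequence)
    total += sum(SETUP[u, v] for u, v in zip(sequence, sequence[1:]))
    return total

# Exhaustive search over intra-station sequences (feasible only for tiny
# instances, which is why the paper resorts to heuristics and GRASP).
best = min(permutations(TASK_TIME), key=workstation_time)
```

For this toy instance the sequence (c, a, b) minimises the global workstation time, saving two time units over the naive (a, b, c) order.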
Dynamic Scheduling for Maintenance Tasks Allocation supported by Genetic Algorithms
Since the first factories were created, manufacturers have always tried to maximize production and, consequently, profits. However, market demands have changed, and nowadays it is not so easy to obtain the maximum yield. Production lines are becoming more flexible and dynamic, and the amount of information flowing through the factory keeps growing. This leads to a scenario where errors in production scheduling may occur often.
Several approaches have been used over time to plan and schedule the shop floor's production. However, some of them do not consider factors present in real environments, such as the fact that machines are not available all the time and sometimes need maintenance. This increases the complexity of the system and makes it harder to allocate the tasks competently. Thus, more dynamic approaches should be used to explore the large search spaces more efficiently.
In this work, an architecture and a corresponding implementation are proposed to obtain a schedule that includes both production and maintenance tasks, the latter often being ignored in related work. It takes the available maintenance shifts into account.
The proposed architecture was implemented using genetic algorithms, which have already proved to be good at solving combinatorial problems such as the Job-Shop Scheduling Problem. The architecture considers the precedence order between the tasks of the same product and the maintenance shifts available in the factory.
The architecture was tested in a simulated environment to check the algorithm's behavior; however, a real data set of production tasks and workstations was used.
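The combination of genetic algorithms, precedence constraints, and maintenance shifts can be sketched on a toy single-machine instance. Everything below (task durations, the single precedence pair, the maintenance window, and the penalty weight) is an invented assumption, not the paper's architecture or data set.

```python
import random

# Toy GA sketch: chromosomes are task orders on one machine; the decoder
# skips a fixed maintenance shift, and broken precedences are penalised.
DURATION = {"t1": 2, "t2": 3, "t3": 1}
PRECEDES = [("t1", "t3")]      # t1 must finish before t3 starts
MAINTENANCE = (4, 6)           # machine unavailable during [4, 6)

def decode(order):
    """Greedy decoder: place tasks in order, jumping over maintenance."""
    t, finish = 0, {}
    for task in order:
        start = t
        if start < MAINTENANCE[1] and start + DURATION[task] > MAINTENANCE[0]:
            start = MAINTENANCE[1]     # shift past the maintenance slot
        finish[task] = start + DURATION[task]
        t = finish[task]
    return finish

def fitness(order):
    finish = decode(order)
    makespan = max(finish.values())
    # finish[b] - DURATION[b] is the start time of b.
    violations = sum(finish[a] > finish[b] - DURATION[b]
                     for a, b in PRECEDES)
    return makespan + 100 * violations  # penalise broken precedences

def evolve(pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(list(DURATION), len(DURATION))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]   # keep the better half
        children = []
        for parent in elite:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

With only three tasks the optimum is easy to verify by hand; the point of the sketch is the encoding (a permutation chromosome whose decoder injects the maintenance shift), which is the part that scales to realistic job-shop instances.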
Event processing in web of things
The incoming digital revolution has the potential to drastically improve our productivity,
reduce operational costs and improve the quality of the products. However, the realization
of these promises requires the convergence of technologies — from edge computing
to cloud, artificial intelligence, and the Internet of Things — blurring the lines between
the physical and digital worlds. Although these technologies evolved independently over
time, they are increasingly becoming intertwined. Their convergence will create an unprecedented
level of automation, achieved via massive machine-to-machine interactions
whose cornerstone is event processing tasks.
This thesis explores the intersection of these technologies by making an in-depth analysis
of their role in the life-cycle of event processing tasks, including their creation, placement
and execution. First, it surveys currently existing Web standards, Internet drafts,
and design patterns that are used in the creation of cloud-based event processing. Then, it
investigates the reasons for event processing to start shifting towards the edge, alongside
with the standards that are necessary for a smooth transition to occur. Finally, this work
proposes the use of deep reinforcement learning methods for the placement and distribution
of event processing tasks at the edge. Obtained results show that the proposed
neural-based event placement method is capable of obtaining (near) optimal solutions in
several scenarios and provide hints about future research directions.

The research published in this work was supported by the Portuguese Foundation for
Science and Technology (FCT) through CEOT (Center for Electronic, Optoelectronic and
Telecommunications) funding (UID/MULTI/00631/2020) and by FCT Ph.D grant to Andriy
Mazayev (SFRH/BD/138836/2018).
Allocation of Heterogeneous Resources of an IoT Device to Flexible Services
Internet of Things (IoT) devices can be equipped with multiple heterogeneous
network interfaces. An overwhelmingly large amount of services may demand some
or all of these interfaces' available resources. Herein, we present a precise
mathematical formulation of assigning services to interfaces with heterogeneous
resources in one or more rounds. For reasonable instance sizes, the presented
formulation produces optimal solutions for this computationally hard problem.
We prove the NP-Completeness of the problem and develop two algorithms to
approximate the optimal solution for big instance sizes. The first algorithm
allocates the most demanding service requirements first, considering the
average cost of the interfaces' resources. The second one calculates the
demanded resource shares and allocates the most demanding of them first, choosing
randomly among equally demanding shares. Finally, we provide simulation results
giving insight into services splitting over different interfaces for both
cases.
Comment: IEEE Internet of Things Journal.
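The first heuristic ("most demanding service requirements first") can be sketched on an invented instance. The interface capacities, per-unit costs, and service demands below are assumptions, and this sketch allocates each service whole to a single interface, leaving aside the multi-round splitting the paper also covers.

```python
# Greedy sketch in the spirit of the paper's first algorithm (assumed
# simplification): sort services by demand, descending, and give each one
# the cheapest interface that still has enough free capacity.
INTERFACES = {                  # capacity and cost per resource unit
    "wifi": {"cap": 10, "cost": 1.0},
    "ble":  {"cap": 4,  "cost": 0.5},
    "lte":  {"cap": 8,  "cost": 2.0},
}
SERVICES = {"stream": 6, "telemetry": 3, "sync": 4}

def allocate(interfaces, services):
    free = {i: spec["cap"] for i, spec in interfaces.items()}
    assignment = {}
    for svc in sorted(services, key=services.get, reverse=True):
        feasible = [i for i in free if free[i] >= services[svc]]
        if not feasible:
            continue             # service stays unallocated this round
        best = min(feasible, key=lambda i: interfaces[i]["cost"])
        assignment[svc] = best
        free[best] -= services[svc]
    return assignment

plan = allocate(INTERFACES, SERVICES)
```

Handling the most demanding services first keeps large requests from being squeezed out after cheaper interfaces have been fragmented by small ones; the problem's NP-completeness means this greedy order only approximates the optimum.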